I have an app that uses a single-user OAuth token. I could store the four values (consumer key/secret, token/secret) directly inside the app, but that's not recommended, and I don't want the secrets checked into source control. The app doesn't use a database. I know that however I store them, someone with access to the server could figure them out, but I'd like to at least get them out of the source code. I've thought of passing them as Passenger environment variables or storing them in a separate file on the server, but are there better ways? Is there any point in encrypting them, since anyone who could see them would also have the access needed to decrypt them?
Not having the keys stored in the source code is actually considered bad practice in the most agile setups (continuous deployment).
But from what you say, you want two groups: those who write the code, and those who deploy it. Those who deploy have access to the keys and, in the most secure setting, must NOT touch the application's code. You can still make OAuth work by having the application authenticate to a system that proxies the entire authorization part and authenticates the application. Those keys (app -> auth middleman) can live in the repository, as they are internal.
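For illustration, here is a minimal sketch of such an authorization proxy, assuming Sinatra and the oauth gem; every name and URL below is made up. The deployers run this proxy with the real OAuth credentials in its environment, while the application only carries the internal key:

require "sinatra"
require "oauth"

# Internal key: safe to commit, it only grants access to this proxy.
INTERNAL_KEY = "internal-shared-key"

# Only the deployers' environment knows the real OAuth credentials.
consumer = OAuth::Consumer.new(ENV.fetch("CONSUMER_KEY"), ENV.fetch("CONSUMER_SECRET"),
                               site: "https://api.example.com")
token = OAuth::AccessToken.new(consumer, ENV.fetch("TOKEN"), ENV.fetch("TOKEN_SECRET"))

get "/proxy/*" do
  halt 401 unless request.env["HTTP_X_INTERNAL_KEY"] == INTERNAL_KEY
  token.get("/" + params["splat"].first).body  # forward a signed request upstream
end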
Any other setup, such as an authentication library created by those who can deploy, encrypted keys, or anything else, can be broken by those who write the code. If you don't trust them enough to have access to the keys, you probably don't trust them enough not to try to jailbreak the keys.
The resulting deployment scheme is much more complicated and therefore much more prone to errors, but it is otherwise more secure. You still have to trust someone: those who install the operating system, the proxy's system middleware, those who maintain the proxy's machine(s), those who can log on to it, and so on. If the group of people with access to the keys is small enough and you trust them, then you've gained security. Otherwise, you've lost security and the ability to respond to change, and wasted a lot of people's time.
Unfortunately, all authorization schemes require you to trust someone; there is no way around it. This holds for any application/framework/authorization scheme, not only Sinatra, Rails, OAuth, Java, RSA signatures, elliptic curves, and so on.
So I have a standard Rails app running on EC2 that needs access to S3. I am currently doing it with long-term access keys, but rotating keys is a pain, and I would like to move away from this. It seems I have two alternative options:
One, attaching a role with the proper permissions to access the S3 bucket to the EC2 instance. This seems easy to set up, yet not having any access keys feels like a bit of a security threat: if someone is able to access the server, it would be very difficult to stop them from accessing S3.
Two, I can 'assume the role' using the Ruby SDK and the STS classes to get temporary access keys from the role, and use them in the Rails application. I am pretty confused about how to set this up, but could probably figure it out. It seems like a very secure method: even if someone gets access to your server, the temporary access keys make it considerably harder to access your S3 data over the long term.
I guess my main question is which should I go with? Which is the industry standard nowadays? Does anyone have experience setting up STS?
Sincere thanks for the help and any further understanding on this issue!
All of the methods in your question require AWS access keys. The keys may not be obvious, but they are there. There is not much you can do to stop someone once they have access inside the EC2 instance, other than terminating the instance. (There are other options, but those are for forensics.)
You are currently storing long-term keys on your instance. This is strongly NOT recommended. The recommended best practice is to use IAM Roles, assigning a role with only the required permissions. The AWS SDKs will fetch the credentials from the instance's metadata.
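As a minimal sketch (assuming the aws-sdk-s3 gem, v3, and made-up bucket/region names): with a role attached to the instance, the client code needs no keys at all; the SDK resolves temporary credentials from the instance metadata service and refreshes them automatically.

require "aws-sdk-s3"

# No credentials passed in: the SDK falls back to the instance profile.
s3 = Aws::S3::Client.new(region: "us-east-1")
s3.get_object(bucket: "my-example-bucket",
              key: "reports/latest.csv",
              response_target: "/tmp/latest.csv")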
You are giving some thought to using STS. However, you need credentials to call STS to obtain temporary credentials. STS is an excellent service, but it is designed for handing out short-term temporary credentials to others, such as when your web server creates credentials via STS to hand to your users for limited use cases like accessing files on S3 or sending an email. The fault in your thinking about STS is that once the bad guy has access to your server, he will just steal the keys that you call STS with, thereby defeating the point of calling STS.
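For completeness, the call itself is simple; a hedged sketch with the aws-sdk-core gem and a hypothetical role ARN:

require "aws-sdk-core"

sts = Aws::STS::Client.new(region: "us-east-1")
resp = sts.assume_role(
  role_arn: "arn:aws:iam::123456789012:role/S3ReadOnly",  # hypothetical role
  role_session_name: "limited-download",
  duration_seconds: 900  # the shortest lifetime STS allows
)
# resp.credentials holds access_key_id, secret_access_key, and session_token
# -- hand these to the client, never your long-term keys.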
In summary, follow best practices for securing your server such as NACLs, security groups, least privilege, minimum installed software, etc. Then use IAM Roles and assign the minimum privileges to your EC2 instance. Don't forget the value of always backing up your data to a location that your access keys CANNOT access.
I have read that when storing the app_id and app_secret, you do not want to add them directly to the code in your initializer, because there are supposedly security vulnerabilities. Solutions like Heroku let you create environment variables for this, but I want to understand what the vulnerabilities really are.
If these keys were written in my initializer code, committed to git, pushed to a private repo on GitHub, and deployed using Heroku, what security vulnerabilities would exist?
The first concern we, as developers, think of when securing our app is some hacker from the outside world getting data directly from our app that they should not be able to get. We should then make our app rock solid, right?
Well, even if that were nearly impossible, there are at least two more avenues besides a direct vulnerability in your app:
Social engineering
Enterprise network vulnerability
Social engineering: probably the hardest leak to close. People's ability to detect that they are being manipulated varies over time and depends on a lot of things (mood, money, ...). You're just one phone call away from leaking information.
Enterprise network vulnerability: a chain is only as safe as its weakest link. Even if you made your app the only 100% unbreakable one in the known world, someone could still use an open door in your company network to get your app's credentials. This method is often combined with social engineering: first gain access to the intranet, then proceed to your application.
To mitigate these vulnerabilities, you should restrict access to your credentials as much as possible, to the point where even you can't get at them easily.
Adding production credentials to the repository means more people can see them and makes it easier for an attacker to get access to them (even by just sniffing your Wi-Fi network).
Putting your credentials into environment variables is not a perfect solution, but at least you decrease the number of people with access to them (#1), and they travel over the network a lot less (#2).
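A minimal sketch of what that looks like in a Rails initializer (the variable names are illustrative, not any gem's API):

# config/initializers/oauth.rb
APP_ID     = ENV.fetch("OAUTH_APP_ID")      # fetch raises if unset, so you
APP_SECRET = ENV.fetch("OAUTH_APP_SECRET")  # fail fast at boot, not at first request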
There is no way to be 100% secure, but you should work hard to get as close as you can and mitigate possible flaws.
I'm having trouble determining what the best approach is for the following scenario.
My application POST's to my web service.
POST URL includes several parameters, including device info + a shared secret
The device is stored in my database IF the shared secret is correct
At the moment, this shared secret is hard-coded in the app and the connection to my web service is over SSL.
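For concreteness, a minimal sketch of the server-side check being described, assuming Sinatra (the route and parameter names are made up):

require "sinatra"
require "rack/utils"

SHARED_SECRET = ENV.fetch("DEVICE_SHARED_SECRET")

post "/devices" do
  # Constant-time comparison, so the check itself doesn't leak the secret.
  halt 403 unless Rack::Utils.secure_compare(params["secret"].to_s, SHARED_SECRET)
  # ... store params["device_id"] and friends in the database here ...
  status 201
end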
Hard-coding the secret and using SSL limits people's ability to discover it and abuse my web service.
However, this approach isn't as secure as I'd like, because someone could decompile my app and extract the secret.
Is there a better way of doing this, as opposed to the shared secret approach?
With locally stored keys, almost every security approach can be leaked by somebody, somehow. That does not mean, of course, that we shouldn't put in any effort at all.
If people download your app, they can dig further into the code by reverse engineering and/or decompiling it.
However, if there is no other way than putting the secret key within your app's binary, you're left with a (weaker) alternative often called security through obscurity.
There are many ways to do this and you can probably find a lot of discussion on the internet about this topic so here are just some ideas:
Split the key across multiple classes and throughout your code
Disguise your key as a string that could plausibly be used in a normal way within your app
Hash some data or code segments on startup and include them in your key
Use all of the methods named above together (a small sketch follows this list)
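As a toy illustration of the first two ideas, here is a sketch (in Ruby, for brevity) that keeps the key out of the binary as a literal by splitting it into fragments and XOR'ing each byte with a constant. A reverse engineer can still recover it; it just won't show up in a strings dump:

# The fragments decode to "secret" when XOR'ed with MASK; in a real app,
# scatter them across classes rather than defining them side by side.
FRAGMENTS = ["\x09\x1f", "\x19\x08", "\x1f\x0e"].freeze
MASK = 0x7a

def api_secret
  FRAGMENTS.join.bytes.map { |b| (b ^ MASK).chr }.join
end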
There are even some frameworks out there, like UAObfuscatedString, which might help you implement your logic.
Keep in mind, the best way is always not to hardcode a secret key in your app's binary, but to somehow "load" the secret from your server, which e.g. calculates the key…
I'm making a website that I'm planning on making readily deployable by users; they'll be able to take my source code and deploy it to their own server and run it as their own.
I was thinking of trying to incorporate SSL and OpenID and other features into the website. Would giving the users access to these files (such as my OpenID/Twitter/Facebook client key and secret, or the SSL certificate stuff, or whatever else..) be a potential security hazard or anything? Would they be able to do anything dangerous with this information?
SSL is not the app's concern
Any client keys and secrets are your own responsibility... I wouldn't distribute them openly.
Normally what one does is to read this information from the environment:
facebook_client_key = ENV["FACEBOOK_CLIENT_KEY"]
so the deployer has only to configure the environment, not the application.
I would steer clear of adding things like your client keys and secrets to any files you distribute to your users. They're called secrets for a reason! I don't know the ins and outs of Facebook's or Twitter's APIs, but certainly with products such as Akismet, the anti-spam add-on for WordPress, the key is used to identify your particular WordPress instance.
If you are using a WordPress site for commercial purposes, you're supposed to pay for Akismet. The problem is that while you might not be using it for commercial purposes yourself, depending on what you're making and distributing, that's not to say other people won't, and they could end up ruining it not just for you, but for everyone else using your software.
You should make the keys and secrets part of your application's configuration and, perhaps, provide instructions on how your users can obtain their own.
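One hedged sketch of that, assuming a plain YAML file that each deployer writes locally and keeps out of version control (the file name and keys are illustrative):

# config/credentials.yml -- listed in .gitignore, created by the deployer:
#   facebook_client_key: "their-own-key"
#   facebook_client_secret: "their-own-secret"
require "yaml"

CREDENTIALS = YAML.load_file("config/credentials.yml")
facebook_client_key = CREDENTIALS.fetch("facebook_client_key")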
I'm making a twitter client, and I'm evaluating the various ways of protecting the user's login information.
IMPORTANT: I need to protect the user's data from other applications. For example, imagine what happens if a bot starts going around stealing Twhirl passwords, or Hotmail/GMail/Yahoo/PayPal passwords, from applications that run on the user's desktop.
Clarification: I asked this before without the 'important' portion, but Stack Overflow's UI doesn't help with adding details later inside the Q/A conversation.
Hashing apparently doesn't do it
Obfuscating in a reversible way is like trying to hide behind my finger
Plain text sounds, and probably is, promiscuous
Requiring the user to type in his password every time would make the application tiresome
Any ideas?
This is a catch-22. Either you make the user type in his password every time, or you store it insecurely (obfuscated, encrypted, whatever).
The way to fix this is for more operating systems to incorporate built-in password managers, like OS X's Keychain. That way you just store your password in the Keychain, the OS keeps it secure, and the user only has to type in one master password. Lots of applications on OS X (like Skype) use the Keychain to do exactly what you are describing.
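If the client is scriptable, one way to reach the Keychain without native code is to shell out to the built-in security command-line tool. A hedged Ruby sketch, with made-up service and account names:

SERVICE = "my-twitter-client"

def store_password(account, password)
  # -U updates the item in place if it already exists
  system("security", "add-generic-password", "-U",
         "-s", SERVICE, "-a", account, "-w", password)
end

def fetch_password(account)
  out = IO.popen(["security", "find-generic-password",
                  "-s", SERVICE, "-a", account, "-w"], &:read).chomp
  out.empty? ? nil : out
end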
But since you are probably using Windows, I'd say just go with some obfuscation and encryption. I think you may be slightly paranoid about password-stealing bots; if your application doesn't have a large user base, the odds are pretty low that someone will target it specifically to steal its passwords. Besides, they would also need access to the victim's filesystem, and if that's the case, the victim probably has a virus/worm and bigger problems.
I think you are missing the bigger picture here:
If the desktop is compromised, you're F#*%ED!
To steal a password from your program, a virus would have to be running on the system as administrator. If the virus has achieved that, stealing passwords from your program is way down on its list of malicious things to do.
Store it in plain text and let the user know.
That way, there are no misconceptions about what level of security you have achieved. If users start complaining, consider xor'ing a published-on-your-website constant onto it. If users keep complaining, "hide" the constant in your code and tell them it's bad security.
If users can't keep bad people out of the box, then in effect all secret data they have is known to Dr. Evil. Doesn't matter whether it's encrypted or not. And if they can keep evil people out, why worry about storing passwords in plain text?
I could be talking out my ass here, of course. Is there a study showing that storing passwords in plain text results in worse security than storing them obfuscated?
If you are making a Twitter client, then use their API
Twitter has very good documentation, so I advise you to read it all before writing a client. The most important part in relation to this question is that you don't need to store passwords; store the OAuth token instead. You use the xAuth step to get the OAuth token, then use the other Twitter APIs with this OAuth token where necessary.
xAuth provides a way for desktop and mobile applications to exchange a username and password for an OAuth access token. Once the access token is retrieved, xAuth-enabled developers should dispose of the login and password corresponding to the user.
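With the Ruby oauth gem, the exchange is a one-off call; a sketch (note that Twitter only enables xAuth for approved applications, and the x_auth_* parameter names come from the xAuth documentation):

require "oauth"

consumer = OAuth::Consumer.new(CONSUMER_KEY, CONSUMER_SECRET,
                               site: "https://api.twitter.com")
access_token = consumer.get_access_token(nil, {},
  "x_auth_mode"     => "client_auth",
  "x_auth_username" => username,
  "x_auth_password" => password)
# Persist access_token.token and access_token.secret; discard the password.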
You never store passwords if you can get away with it
Using OAuth, the worst that can happen is that a third party (a black-hat hacker) gets access to the Twitter account, but not the password. This protects users who naively use the same password for multiple online services.
Use a keychain of some sort
Finally, I agree that pre-made solutions such as OS X's Keychain should be used to store the sensitive OAuth information; a compromised machine would only reveal the information in the currently unlocked keychains. This means that on a multi-user system, only the logged-in users' keychains become vulnerable.
Other damage limitations
There may be stuff that I've missed; do a Google search for "best security practices" and start reading whatever seems relevant.
EDIT (in response to finnw's desire for a general-case solution)
You want, given no user input, access to an online service. This typically means you have, at most, user-level access control to the authentication credentials, via something like Keychain.
I have never used OS X's Keychain, so I'll talk about SELinux instead. With SELinux you can ensure that these authentication credentials are only given to your program. And if we continue with OS-level measures, you could also sign all processes from boot to be cryptographically certain that no other program is mimicking yours. This is all beyond a typical user's system, and with this level of setup you can be reasonably sure the user is not naive enough to be compromised, or that the sysadmin is competent. At this level we can protect those credentials.
Let's assume we don't go that far in protecting the credentials; then we must assume the system can be compromised. At that point the authentication credentials are compromised too: obfuscating or encrypting them locally doesn't add any real security, and neither does storing part or all of them on a third-party server. This is easy to see: since your program needs no user input, it must be able to bootstrap itself to obtain those credentials. If your program can do that with no input, then so can anyone who has reverse engineered your obfuscation/encryption/server protocol.
At this point it is about damage limitation: don't store the password as the authentication credential. Use OAuth, cookie sessions, salted hashes, etc.; they are all just tokens representing the fact that at some point in the past you proved you knew the password. In any good system these tokens can be revoked, time-expired, and/or periodically exchanged for a new token during an active session.
The token (whatever form it takes) can also contain additional non-user-input authentication information that restricts your ability to use it elsewhere. It could, for example, encapsulate your hostname and/or IP address. This makes it difficult to use the credentials on a different machine, since mimicking these forms of credentials would require access to the appropriate level of network infrastructure.
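A toy sketch of that binding idea, assuming a server-side secret and an HMAC over the hostname (entirely illustrative, not any particular protocol):

require "openssl"
require "socket"

def host_bound_token(base_token, server_secret)
  tag = OpenSSL::HMAC.hexdigest("SHA256", server_secret,
                                "#{base_token}|#{Socket.gethostname}")
  "#{base_token}.#{tag}"  # the server recomputes the tag and compares
end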
Upon further contemplation, I think I found a way. I will use ASP.NET authentication for my desktop application, store the credentials online, and let Internet Explorer's password manager handle the local caching of this secondary pair of credentials for me.
I will just have to have them authenticate through a Facebook-API-like form during the first login.
I don't get it... why is encryption no good? Use a large key and store the key in the machine key store (assuming Windows). Done, and done.
OSX: Use the Keychain
Windows: Use CryptProtectData and CryptUnprotectData
Linux: Use GNOME Keyring and KDE KWallet