Is it safe to add security features to a mass-distributable website? - ruby-on-rails

I'm making a website that I'm planning on making readily deployable by users; they'll be able to take my source code and deploy it to their own server and run it as their own.
I was thinking of trying to incorporate SSL and OpenID and other features into the website. Would giving the users access to these files (such as my OpenID/Twitter/Facebook client key and secret, or the SSL certificate stuff, or whatever else..) be a potential security hazard or anything? Would they be able to do anything dangerous with this information?

SSL is not the app's concern
All client keys and secrets are your own responsibility; I wouldn't distribute them openly.
Normally what one does is to read this information from the environment
facebook_client_key = ENV["FACEBOOK_CLIENT_KEY"]
so the deployer only has to configure the environment, not the application.
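A minimal sketch of that in a Rails initializer (the file and variable names are assumptions); using ENV.fetch makes a missing value fail loudly at boot instead of surfacing later as a nil secret:

# config/initializers/oauth_credentials.rb (hypothetical path)
FACEBOOK_CLIENT_KEY    = ENV.fetch("FACEBOOK_CLIENT_KEY")     # raises KeyError if unset
FACEBOOK_CLIENT_SECRET = ENV.fetch("FACEBOOK_CLIENT_SECRET")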

I would steer clear of adding things like your client keys and secrets to any files you distribute to your users. They're called secrets for a reason! I don't know the ins and outs of Facebook's or Twitter's APIs, but certainly with products such as Akismet, the anti-spam add-on for WordPress, the key is used to identify your particular WordPress instance.
If you are using a WordPress site for commercial purposes, you're supposed to pay for Akismet. The problem is that while you might not be using your key for commercial purposes yourself, depending on what you're making and distributing, that's not to say other people won't, and they could end up ruining it not just for you but for everyone else using your software.
You should make the keys and secrets part of your application's configuration and, perhaps, provide instructions on how your users can obtain their own.
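One way to make them part of the configuration, sketched below with hypothetical file names, is to commit an example config file with placeholders and have each deployer copy it and fill in keys they obtained themselves; the real file stays out of version control.

# config/oauth.yml.example -- committed with placeholders; deployers copy it
# to config/oauth.yml (gitignored) and fill in their own keys:
#
#   facebook:
#     client_key: YOUR_KEY_HERE
#     client_secret: YOUR_SECRET_HERE

# config/initializers/oauth.rb
require "yaml"
OAUTH_CONFIG    = YAML.load_file(Rails.root.join("config", "oauth.yml"))
FACEBOOK_KEY    = OAUTH_CONFIG["facebook"]["client_key"]
FACEBOOK_SECRET = OAUTH_CONFIG["facebook"]["client_secret"]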

Related

Rails: What security vulnerabilities are there when storing Facebook id and secret inside an initializer

I have read that when storing the app_id and app_secret, you do not want to add them directly to the code in your initializer, because there are security vulnerabilities. Solutions like Heroku allow you to create env variables for something like this, but I want to understand what the vulnerabilities really are.
If these keys were written in my initializer code, committed to git, pushed to a private repo on GitHub, and deployed using Heroku, what security vulnerabilities would exist?
The first concern we, as developers, have when securing our app is some hacker from the outside world getting data they should not be able to get directly from our app. So we should make the app itself rock solid, right?
Well, even if that were nearly impossible, there are at least two more avenues besides a direct vulnerability in your app:
Social engineering
Enterprise network vulnerability
Social engineering: probably the hardest leak to close. People's ability to notice they are being manipulated varies over time and depends on a lot of things (mood, money, ...). You're just one phone call away from leaking information.
Enterprise network vulnerability: the chain is only as strong as its weakest link. Even if you made your app the only 100% unbreakable one in the known world, someone could still use an open door in your company network to get credentials for your app. This method is often combined with social engineering: first gain access to the intranet, then proceed to your application.
To mitigate these vulnerabilities you should restrict access to your credentials as much as possible, to the point that even you can't get to them easily.
Adding production credentials to the repository means more people can see them and makes it easier for an attacker to get access to them (even just by sniffing your wifi network).
Putting your credentials into env vars is not a perfect solution, but at least you decrease the number of people with access to them (#1) and they travel over the network a lot less (#2).
There is no way to be 100% secure, but you should work hard to get as close as you can and mitigate possible flaws.
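To make the env-var suggestion concrete, here is a rough sketch (the dotenv gem and variable names are assumptions; any mechanism that populates ENV works): keep a gitignored .env file locally and real config vars on Heroku, so the keys never enter the repository or its history.

# Gemfile -- only needed in development/test; Heroku sets real config vars
gem "dotenv-rails", groups: [:development, :test]

# .env -- listed in .gitignore so it never reaches git history
# FACEBOOK_APP_ID=...
# FACEBOOK_APP_SECRET=...

# config/initializers/facebook.rb
FB_APP_ID     = ENV["FACEBOOK_APP_ID"]
FB_APP_SECRET = ENV["FACEBOOK_APP_SECRET"]

# Production: heroku config:set FACEBOOK_APP_ID=... FACEBOOK_APP_SECRET=...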

OAuth Secrets and Desktop Application

I am looking into creating a desktop app in an interpreted language that accesses Google's APIs. From what I can tell, there is a security hole. The client secret would be exposed within the code, and even if I created the application in C++ or Java, the code could be decompiled/disassembled and the secret could in theory be found. Is there any way around that besides obfuscating the code? I'd like to be able to distribute the code for others to use.
OAuth 2.0 Threat Model and Security Considerations (RFC 6819) lists Obtaining Client Secrets as a threat.
And as the Google doc Using OAuth 2.0 for Installed Applications says:
These applications are distributed to individual machines, and it is assumed that these applications cannot keep secrets.
So there are no Client "Secrets" in fact. Trying to obfuscate a secret in installed applications is a futile effort as the secrets can always be recovered using the abundance of reverse-engineering and debugging tools.
Of course, you should do your best to protect secrets, but in the end a highly motivated hacker can always extract them from an installed application. So it's the value of the secret vs. the difficulty of extraction. The value of the client secret is impersonating the application. It doesn't give any access to user data.
My suggestions:
Just take the risk, go ahead, and obfuscate it. Or
consider using the proxy pattern (move the secret to a web server acting as an API proxy), as sketched below.
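A rough sketch of that proxy pattern in Sinatra (the endpoint name and parameter handling are assumptions, not a prescribed setup): the installed app sends only the authorization code to your server, and the client secret never leaves the server that performs the token exchange.

require "sinatra"
require "net/http"
require "uri"

CLIENT_ID     = ENV.fetch("GOOGLE_CLIENT_ID")
CLIENT_SECRET = ENV.fetch("GOOGLE_CLIENT_SECRET")  # lives only on this proxy server

# The desktop app posts the authorization code here instead of talking to the
# token endpoint directly, so the secret never ships inside the installed app.
post "/exchange" do
  res = Net::HTTP.post_form(
    URI("https://oauth2.googleapis.com/token"),
    "grant_type"    => "authorization_code",
    "code"          => params[:code],
    "redirect_uri"  => params[:redirect_uri],
    "client_id"     => CLIENT_ID,
    "client_secret" => CLIENT_SECRET
  )
  content_type :json
  res.body  # the token response is passed straight back to the app
end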

OAuth provider with multiple consumer keys for single app

I'm working on an appengine app which uses OAuth. Naturally, I'm dealing with multiple versions of the app simultaneously - a local version for development, a staging version and a deployment version.
To work with these, I need three separate sets of OAuth consumer keys/secrets as the callback on authentication is defined on the provider's site.
I was wondering if there are ways for providers to provide multiple keys/secrets for a given app - this would seem to make more sense than setting up a new app each time. (Of course, it requires the provider to implement this, but it seems a natural thing to implement and I haven't seen it).
More generally, what standard approaches are used to deal with this? My guess is to register multiple apps and have logic in the app to determine whether it's running in development, staging, or deployment mode. Any thoughts welcome.
I find this to be one of the most annoying parts of being an OAuth API client developer. There is no reason why providers should not allow developers to register redirection (callback) URIs for testing.
The standard approach I've seen is to allow you to whitelist one or more domains for callback/redirection. Facebook has some crazy setup where they let you "register" multiple domains by using different domains for the various links in the application profile. I did not have much luck with that. Twitter has one of the better implementations, letting you register multiple domains.
In OAuth 2.0 (draft 18 or newer), this topic gets much better treatment. Registration of the full URI is recommended, with the ability to register multiple callbacks and select the one you want to use dynamically at request time.
The main aspect to consider is how you want to handle permissions with a staging setup. Do you want to be able to reuse existing approvals or keep those separate? Also, if the API provides special client-only calls (such as client storage or management tools), do you want the staging version to share them or keep its own (so that testing will not mess up production)?
In the end, providers should offer a complete development environment, and that includes testing facilities for API clients. Most don't.
From an API provider's perspective, your app is simply an app using the APIs. Usually there is no such thing as a "staging" API that does not deal with live production data. Whatever it is you are testing, you are testing it on live data, right?
If you are able to register several different applications, with, for example, different callbacks, then I think your problem is pretty much solved. My view is that it should be the consumer's responsibility to keep these things separated.
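One way to keep those per-environment registrations separated, sketched here in Ruby with hypothetical file and key names, is to register one app per environment and pick the matching credentials from configuration at boot:

# config/oauth.yml (hypothetical) -- one registered app per environment:
#   development:
#     consumer_key: dev_key
#     consumer_secret: dev_secret
#   staging:
#     consumer_key: staging_key
#     consumer_secret: staging_secret
#   production:
#     consumer_key: prod_key
#     consumer_secret: prod_secret
require "yaml"

env   = ENV.fetch("RACK_ENV", "development")
oauth = YAML.load_file("config/oauth.yml").fetch(env)
CONSUMER_KEY, CONSUMER_SECRET = oauth.values_at("consumer_key", "consumer_secret")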

Using OAuth in free/open source software

I'm now reading some introduction materials about OAuth, having the idea to use it in a free software.
And I read this:
The consumer secret must never be revealed to anyone. DO NOT include it in any requests, show it in any code samples (including open source) or in any way reveal it.
If I am writing a free client for a specific website using OAuth, then I have to include the consumer secret in the source code, otherwise building from source would make the software unusable. However, as stated above, the secret should not be released along with the source.
I completely understand the security considerations, but, how can I solve this dilemma, and use OAuth in free software?
I thought of using an external website as a proxy for authentication, but this is quite complicated. Do you have better ideas?
Edit:
Some clients like Gwibber also use OAuth, but I haven't checked its code.
I'm not sure I get the problem: can't you develop the code as open source and retrieve the consumer secret from a configuration file, or maybe keep it in a special table in the database? That way the code will not contain the consumer secret (and as such will be "shareable" as open source), but the consumer secret will still be accessible to the application.
Maybe some more details on the intended platform would help, as on some platforms (I'm thinking of Tomcat right now) parameters such as this one can be included in server configuration files.
If it's PHP, I know of an open source project (Moodle) that keeps a PHP file (config.php) containing definitions of all important configuration values and references this file from every page. It is the administrator's responsibility to fill this file with the values particular to that installation. In fact, if the application sees that the file is missing (usually on the first access to the site), it redirects to a wizard where the administrator can fill in the contents in a more user-friendly way. In this case the consumer secret would be one of these configuration values, and as such would be present in the "production" code, but not in the "distributable" form of the code.
I personally like the idea of storing that value (and possibly other parameters) in a database table designed for it, since the code need not be changed. Maybe an installation wizard can be presented here as well in case the values do not exist.
Does this solve your problem?
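A rough sketch of that database-table idea with ActiveRecord and the oauth gem (the model, table, and column names are assumptions): the application refuses to build an OAuth consumer until an administrator has filled in the row, which is where a setup wizard would come in.

# Sketch only -- assumes a table like:
#   create_table :oauth_settings do |t|
#     t.string :consumer_key
#     t.string :consumer_secret
#   end
require "oauth"  # which client library is used is an assumption

class OauthSettings < ActiveRecord::Base
  def self.current
    first or raise "OAuth consumer key/secret not configured yet -- run the setup wizard"
  end
end

settings = OauthSettings.current
consumer = OAuth::Consumer.new(settings.consumer_key, settings.consumer_secret,
                               site: "https://api.example.com")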
If your service provider is a webapp, your server needs consumer signup pages that provide the consumer secret as the user signs up their consumer. This is the same process Twitter applications go through. Try signing up there and look at their workflow; you'll have all the steps.
If your software is peer-to-peer, each application needs to be both a service provider and a consumer. The Jira and Confluence use cases below outline that instance.
In one of my comments, I mention https://twitter.com/apps/new as the place where Twitter app developers generate a consumer secret. How you would make such a page depends on the system architecture. If all the consumers will be talking to one server, that one server will have to have a page like https://twitter.com/apps/new. If there are multiple servers (i.e. federations of clients), each federation will need one server with this page.
Another example to consider is how Atlassian apps use OAuth. They are peer-to-peer. Setting up Jira and Confluence to talk to one another still involves a setup page in each app, but it is nowhere near as complex as https://twitter.com/apps/new. Both apps are consumers and service providers at the same time. Visiting the setup in each app allows that app to be set up as a service provider with a one-way trust on the other app, as consumer. To make a two-way trust, the user must visit both apps' service provider setup pages and tell each the URL of the other app.
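The consumer signup page mentioned above mostly boils down to generating a key/secret pair when a consumer application registers and storing it server-side; a minimal sketch (the ConsumerApp model and its columns are assumptions):

require "securerandom"

class ConsumerApp < ActiveRecord::Base
  # Issue credentials once, when the developer registers their application.
  before_create do
    self.consumer_key    = SecureRandom.hex(16)
    self.consumer_secret = SecureRandom.hex(32)
  end
end

app = ConsumerApp.create!(name: "Example client")
# Display app.consumer_key and app.consumer_secret to the developer on the signup page.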

How can I secure my OAUTH secret in Phusion Passenger Sinatra app?

I have an app that uses a single-user OAuth token. I can store the four values (consumer key/secret, token/secret) directly inside the app, but that's not recommended and I don't want the secrets to be checked into source code. The app doesn't use a database. I know that however I store them, someone with access to the server could figure them out, but I'd like to at least get them out of the source code. I've thought of passing them as Passenger environment variables or storing them in a separate file on the server, but are there better ways? Is there any point to encrypting them, since anyone who could see them would also have the access needed to decrypt them?
Not having the keys stored in the source code is actually a bad practice according to the most agile setup (continuous deployment).
But, from what you say, you want to have two groups: those who write the code, and those who deploy it. Those who deploy it have access to the keys and, in the most secure setting, must NOT touch the code of the application. You can still make OAuth work by having the application authenticate to a system that proxies the whole authorization part and authenticates the application. Such keys (app -> auth middleman) can live in the repository, as they are internal.
Any other setup (an authentication library created by those who can deploy, encrypted keys, anything else) can be broken by those who write the code. If you don't trust them enough to have access to the keys, you probably don't trust them enough not to try to extract the keys.
The resulting deployment scheme is much more complicated and, therefore, much more prone to errors. But it is, otherwise, more secure. You still have to trust someone: those who install the operating system, the proxy's system middleware, those who maintain the proxy's machine(s), those who can log on to it, and so on. If the group of people with access to the keys is small enough, and you trust them, then you've gained security. Otherwise, you've lost security and the ability to respond to change, and wasted a lot of people's time.
Unfortunately, all authorization schemes require you to trust someone. There is no way around it. This is valid for any application/framework/authorization scheme, not only Sinatra, Rails, OAuth, Java, RSA signatures, elliptic curves, and so on.
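For the concrete options in the question itself, the "separate file on the server" route could look roughly like this in a Sinatra app (the path and key names are assumptions): the file lives outside the checkout, for example in a Capistrano-style shared/ directory, readable only by the application user.

require "yaml"

# Sketch: load the four OAuth values from a YAML file kept outside the repository.
creds = YAML.load_file(ENV.fetch("OAUTH_CREDENTIALS_FILE", "/var/www/app/shared/oauth.yml"))
CONSUMER_KEY, CONSUMER_SECRET, TOKEN, TOKEN_SECRET =
  creds.values_at("consumer_key", "consumer_secret", "token", "token_secret")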
