Why is having a password for your database important? (for example, in Rails)

My question is simple: How can a person access my database in production if he knows my password? I know that it can be done, because otherwise you wouldn't have to set a password for it, but I really want to know how.
Also, if someone knows the password for my database, can he execute any query against it (not only SELECT, but also queries that alter the database)?

Your database is on a server, a computer just like any other. It has a NIC with a MAC address and, most importantly, an IP address.
If you've ever used Windows' remote connection utility, you are asked for the IP address of the computer and the login credentials for a user account. From there, you'd open the database management system (which is simply an application running on that computer), and once you've connected to the database, it's just sitting there for you, exactly as it does for its owner.
For an attacker, the process of deleting all of your hard work involves the exact same steps you would take! Pick a good password, and don't store any sensitive information in public-facing directories on the server!
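As a side note on keeping that password out of public view, one common Rails convention (a sketch only; the MYAPP_DB_PASSWORD variable and the database names here are hypothetical) is to supply the production password through an environment variable instead of committing it to the repository:
# config/database.yml -- production entry only
production:
  adapter: postgresql                         # assumption: PostgreSQL
  database: myapp_production                  # hypothetical database name
  username: myapp                             # hypothetical least-privileged DB user
  password: <%= ENV["MYAPP_DB_PASSWORD"] %>   # read from the environment, never committed
  host: localhost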

How can a person access my database in production if he knows my password?
Through an exploit or other script where they can make a connection.
if someone knows the password for my database, can he execute all queries to my database (not only SELECT, but also the ones that alter the database)
They can execute whatever that account has rights to. This is a good reason to give application logins only minimal rights. Typically in full-featured database systems, you can give the application role/account only SELECT on certain tables or views (perhaps not even all columns), and allow it to modify data only through stored procedures. By minimizing the surface area in this way you get defense in depth: not only is the account secured by a password, but even a compromised password exposes only a limited attack surface. This is just one part of your overall security process.
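As a rough illustration of that least-privilege idea (a sketch only, assuming PostgreSQL; the app_readonly role, the table names, and the create_comment function are made up, and the exact GRANT syntax varies by database), the application account could be provisioned from a Rails migration like this:
# db/migrate/xxxx_create_app_role.rb -- hypothetical migration, run as a privileged user
class CreateAppRole < ActiveRecord::Migration[6.1]
  def up
    # A login role for the web application with a deliberately small surface.
    execute "CREATE ROLE app_readonly LOGIN PASSWORD 'change_me'"
    execute "GRANT USAGE ON SCHEMA public TO app_readonly"
    # Read access only on the tables the application actually needs.
    execute "GRANT SELECT ON articles, comments TO app_readonly"
    # Writes go exclusively through a stored procedure the role may execute.
    execute "GRANT EXECUTE ON FUNCTION create_comment(integer, text) TO app_readonly"
  end
  def down
    execute "DROP ROLE IF EXISTS app_readonly"
  end
end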

Related

How to properly handle asynchronous database replication?

I'm considering using Amazon RDS with read replicas to scale our database.
Some of our controllers in our web application are read/write, some of them are read-only. We already have an automated way for identifying which controllers are read-only, so my first approach would have been to open a connection to the master when requesting a read/write controller, else open a connection to a read replica when requesting a read-only controller.
In theory, that sounds good. But then I stumbled upon the concept of replication lag, which basically says that a replica can be several seconds behind the master.
Let's imagine the following use case then:
The browser posts to /create-account, which is read/write, thus connecting to the master
The account is created, transaction committed, and the browser gets redirected to /member-area
The browser opens /member-area, which is read-only, thus connecting to a replica. If the replica is even slightly behind the master, the user account might not exist yet on the replica, thus resulting in an error.
How do you realistically use read replicas in your application, to avoid these potential issues?
I worked with an application which used pseudo-vertical partitioning. Since only a handful of the data was time-sensitive, the application usually fetched from the slaves and queried the master only in selected cases.
As an example: when the user updated their password, the application would always ask the master during authentication. When changing non-time-sensitive data (like user preferences) it would display a success dialog along with a note that it might take a while until everything is updated.
Some other ideas which might or might not work depending on the environment:
After an update, compute the entity's checksum, store it in the application cache, and when fetching the data always verify it against that checksum
Use browser storage or a cookie to store the delta, ensuring the user always sees the latest version
Add an "up-to-date" flag and invalidate it synchronously on every slave node before/after an update
Whatever solution you choose, keep in mind it's subject to the CAP theorem.
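In Rails terms, that split might look roughly like the sketch below (assuming Rails 6+ multiple-database support, a hypothetical primary_replica entry in database.yml, and made-up controllers and helpers such as current_user and User.authenticate):
# app/models/application_record.rb
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true
  # Assumes database.yml defines "primary" and "primary_replica" entries.
  connects_to database: { writing: :primary, reading: :primary_replica }
end
# app/controllers/preferences_controller.rb -- non-time-sensitive reads
class PreferencesController < ApplicationController
  def show
    # Read from the replica; slight staleness is acceptable here.
    ActiveRecord::Base.connected_to(role: :reading) do
      @preferences = current_user.preferences
    end
  end
end
# app/controllers/sessions_controller.rb -- time-sensitive (authentication)
class SessionsController < ApplicationController
  def create
    # Always hit the master so a just-changed password is honoured.
    ActiveRecord::Base.connected_to(role: :writing) do
      @user = User.authenticate(params[:email], params[:password])
    end
  end
end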
This is a hard problem, and there are lots of potential solutions. One potential solution is to look at what Facebook did.
TL;DR: read requests get routed to the read-only copy, but if you do a write, then for the next 20 seconds all your reads go to the writable master.
The other main problem we had to address was that only our master databases in California could accept write operations. This fact meant we needed to avoid serving pages that did database writes from Virginia because each one would have to cross the country to our master databases in California. Fortunately, our most frequently accessed pages (home page, profiles, photo pages) don't do any writes under normal operation. The problem thus boiled down to, when a user makes a request for a page, how do we decide if it is "safe" to send to Virginia or if it must be routed to California?
This question turned out to have a relatively straightforward answer. One of the first servers a user request to Facebook hits is called a load balancer; this machine's primary responsibility is picking a web server to handle the request but it also serves a number of other purposes: protecting against denial of service attacks and multiplexing user connections to name a few. This load balancer has the capability to run in Layer 7 mode where it can examine the URI a user is requesting and make routing decisions based on that information. This feature meant it was easy to tell the load balancer about our "safe" pages and it could decide whether to send the request to Virginia or California based on the page name and the user's location.
There is another wrinkle to this problem, however. Let's say you go to editprofile.php to change your hometown. This page isn't marked as safe so it gets routed to California and you make the change. Then you go to view your profile and, since it is a safe page, we send you to Virginia. Because of the replication lag we mentioned earlier, however, you might not see the change you just made! This experience is very confusing for a user and also leads to double posting. We got around this concern by setting a cookie in your browser with the current time whenever you write something to our databases. The load balancer also looks for that cookie and, if it notices that you wrote something within 20 seconds, will unconditionally send you to California. Then when 20 seconds have passed and we're certain the data has replicated to Virginia, we'll allow you to go back for safe pages.
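Rails 6+ ships middleware that implements essentially this write-then-stick pattern: it records the time of the last write in the session and keeps routing that user's reads to the primary until a configured delay has elapsed. A sketch of the configuration, assuming the multi-database setup above and borrowing Facebook's 20-second window:
# config/environments/production.rb
Rails.application.configure do
  # After any write, keep this user's reads on the primary for 20 seconds,
  # then fall back to the replica once the data has had time to replicate.
  config.active_record.database_selector = { delay: 20.seconds }
  config.active_record.database_resolver =
    ActiveRecord::Middleware::DatabaseSelector::Resolver
  config.active_record.database_resolver_context =
    ActiveRecord::Middleware::DatabaseSelector::Resolver::Session
end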

Secure data transfer between servers

I've got a slightly odd situation to develop for.
The MVC web system my team has to develop (the project is built with Rails) will rely on a login/password from another site.
The idea is that the user will have a login on the third-party site, and somewhere relevant there will be a link to our site. When the user clicks that link, we need to receive some of the user's data from that site.
We have no control over the third-party server, nor direct access to their database. Also, getting them to make any change to their application/infrastructure is a BIG DEAL, so I am looking for the solution with the least impact on them. (Of course they will have to change something, but it will be a political issue, so the less, the better.)
From our point of view, we need to be sure that the user really comes from the third-party site (and only from there), and that we have not received a fake message from an attacker.
Their site has a valid SSL certificate. (No idea if my system will have one; it should.)
Not sure if it's relevant, but we think their server is an Oracle application server which connects to an Oracle database on their internal network.
I first thought of just using SSL, but I'm not sure how to do it (what do I have to check, what do I have to change?) and whether it is safe enough.
My second thought is to use PGP keys and have them sign and encrypt the data before sending it to us; the link to our site would POST to a controller on our server, which would verify and decrypt the data.
Does anyone have any tips/pointers/thoughts that could help me?
If both servers are using SSL, and supposing the other server gives you at least a JSON or XML interface, it should be fine to simply make a secure request (using, for example, rest-client) and evaluate the response on your server.
Most likely you will want to cache user data on login in your server, and if the user/password aren't found, look them up on the other server - this will reduce the load.
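A sketch of that request with the rest-client gem (the partner URL, the users endpoint, and the PARTNER_API_TOKEN shared secret are all hypothetical; the important parts are HTTPS plus certificate verification):
require 'rest-client'
require 'json'
require 'openssl'
# Hypothetical helper: fetch a user's data from the partner site over HTTPS,
# rejecting any connection whose certificate does not verify.
def fetch_partner_user(user_id)
  response = RestClient::Request.execute(
    method:     :get,
    url:        "https://partner.example.com/users/#{user_id}",          # hypothetical endpoint
    headers:    { authorization: "Bearer #{ENV['PARTNER_API_TOKEN']}" }, # assumed shared secret
    verify_ssl: OpenSSL::SSL::VERIFY_PEER
  )
  JSON.parse(response.body)
end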

Best practice for assigning A/B test variation based on IP address

I am starting to write some code for A/B testing in a Grails web application. I want to ensure that requests from the same IP address always see the same variation. Rather than store a map of IP->variant, is it OK to simply turn the IP address into an integer by removing the dots, then use that as the seed for a random number generator? The following is taking place in a Grails Filter:
def ip = request.remoteAddr
def random = new Random(ip.replaceAll(/\./, '').toInteger())
def value = random.nextBoolean()
session.assignment = value
// value should always be the same for a given IP address
I know that identifying users by IP address is not reliable, and I will be using session variables/cookies as well, but this seems to be useful for the case where we have a new session, and no cookies set (or the user has cookies disabled).
You could simply take the 32-bit number and do ip mod number_of_test_scenarios, or use a standard hashing function provided in Ruby (see the sketch after this list). But I feel I should point out a few problems with this approach:
If your app is behind any proxies, the ip will be the same for all the users of that proxy.
Some users will change IPs fairly frequently, more frequently than you think. Maybe (as Joel Spolsky says) "The internet is broken for those users", but I'd say it's a disservice to your customers if you make the internet MORE broken for them, especially in a subtle way, given that they are probably not in a position to do anything about it.
For users who have a new session, you can just assign the cookie on the first request and keep the assignments in memory; unless a user's initial requests go to multiple servers at the same time this should resolve that problem (it's what I do on the app I maintain).
For users with cookies disabled, I'd say "The Internet is broken", and I wouldn't go to too much trouble to support that case; they'd get assigned to a default test bucket and all go there. If you plan to support many such users in a non-broken way you're creating work for yourself, but maybe that's OK. In this case you may want to consider using URL rewriting and 302 redirects to send these users down one scenario or another. However, in my opinion this isn't worth the time.
If your users can log into the site make sure you record the scenario assignments in your database and reconcile the cookie/db discrepancies accordingly.
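Here is the promised sketch of the deterministic-bucket idea, using only the Ruby standard library (the variant names are made up):
require 'ipaddr'
require 'digest'
VARIANTS = %w[A B].freeze  # hypothetical test scenarios
# Hash-based bucketing: the same IP always maps to the same variant,
# with no per-IP state to store anywhere.
def variant_for(ip)
  VARIANTS[Digest::MD5.hexdigest(ip).to_i(16) % VARIANTS.size]
end
# Or, closer to the "32-bit number mod number_of_test_scenarios" suggestion:
def variant_for_numeric(ip)
  VARIANTS[IPAddr.new(ip).to_i % VARIANTS.size]
end
variant_for("192.168.1.1")  # => always the same variant for this IP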

Storing a shared key for Rails application

One of my Rails applications is going to depend on a secret key held in memory, so all of its functions will only become available once an administrator goes to a certain page and uploads the valid key.
The problem is that this key needs to be stored securely, so no other processes on the same machine should be able to access it (so memcached and the filesystem are not suitable). One good idea would be just to store it in some configuration variable in the application, but newly spawned instances won't have access to that variable. Any thoughts on how to implement this on RubyEE/Apache/mod_passenger?
There is really no way to accomplish that goal (this is the same problem all DRM systems have).
You can't keep things secret from the operating system. Your application has to have the key somewhere in memory, and the operating system kernel can read any memory location it wants to.
You need to be able to trust the operating system, which means you can then also trust it to properly enforce file access permissions. This in turn means that you can store the key in a file that only the Rails user process can read.
Think of it this way: even if you had no key at all, what is to stop an attacker on the server from simply changing the application code itself to gain access to the disabled functionality?
I would use the filesystem, with read access only for the file owner (using chmod 400 on the file), and ensure the Ruby process is the only process owned by that user.
You can get more complex than that, but it all boils down to using the unix users and permissions.
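A sketch of what that might look like at boot time (the /etc/myapp/secret.key path is hypothetical; the file is assumed to be owned by the application user and chmod 400):
# config/initializers/secret_key.rb -- hypothetical initializer, runs in every spawned instance
KEY_PATH = "/etc/myapp/secret.key".freeze
stat = File.stat(KEY_PATH)
# Refuse to boot if anyone other than the owner can read the key file.
if stat.mode & 0o077 != 0
  raise "Refusing to load #{KEY_PATH}: permissions too open (#{format('%o', stat.mode & 0o777)})"
end
# Keep the key in memory for this process only.
APP_SECRET_KEY = File.read(KEY_PATH).strip
Since every spawned instance runs the initializers, each worker ends up with its own in-memory copy, which addresses the newly-spawned-instance concern (at the cost of keeping the key on disk).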
Encrypt it heavily in the filesystem?
What about treating it like a regular password, and using a salted hash? Once the user authenticates, he has access to the functions of the website.

Protecting user passwords in desktop applications

I'm making a twitter client, and I'm evaluating the various ways of protecting the user's login information.
Hashing apparently doesn't do it
Obfuscating in a reversible way is like trying to hide behind my finger
Plain text sounds, and probably is, promiscuous
Requiring the user to type in his password every time would make the application tiresome
Any ideas ?
You could make some OS calls to encrypt the password for you.
On Windows:
You can encrypt a file (on a NTFS filesystem)
Use the DPAPI from C
Use the DPAPI in .Net by using the ProtectedData class
CryptProtectData is a Windows function for storing this kind of sensitive data.
http://msdn.microsoft.com/en-us/library/aa380261.aspx
For an example see how Chrome uses it:
http://blog.paranoidferret.com/index.php/2008/09/10/how-google-chrome-stores-passwords/
For Windows: encrypt the password using DPAPI (user store) and store it in your settings file or somewhere else. This will work on a per-user basis, e.g. different users on the same machine will have different unrelated encryption keys.
What platform?
On *nix, store the password in plain text in a file chmoded 400 in a subdirectory of the home directory. See for example ~/.subversion. Administrators can do anything they like to users anyway, including replacing your program with their own hacked version that captures passwords, so there's no harm in the fact that they can see the file. Beware that the password is also accessible to someone who takes out that hard drive - if this is a problem then either get the user to reenter the password each time or check whether this version of *nix has file encryption.
On Windows Pro, store the password in an encrypted file.
On Windows Amateur, do the same as *nix. [Edit: CryptProtectData looks good, as Aleris suggests. If it's available on all Windowses, then it solves the problem of only the more expensive versions supporting encrypted files].
On Symbian, store the password in your data cage. Programs with AllFiles permission are rare and supposedly trusted anyway, a bit like *nix admins.
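For the *nix case, a sketch of storing the password the way ~/.subversion does (the ~/.mytwitterclient location is made up):
require 'fileutils'
# Hypothetical helper: store the password in a file only the owner can read.
def store_password(password)
  config_dir    = File.join(Dir.home, ".mytwitterclient")
  password_file = File.join(config_dir, "credentials")
  FileUtils.mkdir_p(config_dir, mode: 0o700)  # directory readable only by the owner
  File.write(password_file, password)
  File.chmod(0o400, password_file)            # then make the file owner-read-only
  password_file
end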
You can't have your cake and eat it too. Either store the password (which you've ruled out), or don't and require it to be typed in every time (which you've ruled out.)
Have a good symmetric encryption scheme; it should make it difficult enough to decrypt the credentials that it won't be worth trying.
Otherwise, if the service only requires a hash to be sent over the network, you can store that hash encrypted. This way even decrypting it won't get the attacker any closer to the plain-text password.
However, the other answers are right: if you store the data, it can be found.
The key is finding the balance between security and usability.
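A sketch of that symmetric-encryption suggestion using Ruby's OpenSSL bindings (AES-256-GCM here; where the 32-byte key itself lives is exactly the hard part this thread is debating, so it is just a parameter):
require 'openssl'
require 'base64'
# Encrypt credentials with AES-256-GCM. `key` must be 32 random bytes,
# e.g. OpenSSL::Random.random_bytes(32); protecting it is the real problem.
def encrypt_credentials(plaintext, key)
  cipher = OpenSSL::Cipher.new('aes-256-gcm')
  cipher.encrypt
  cipher.key = key
  iv = cipher.random_iv                         # 12-byte IV, unique per encryption
  ciphertext = cipher.update(plaintext) + cipher.final
  # IV and auth tag are not secret; store them next to the ciphertext.
  Base64.strict_encode64(iv + cipher.auth_tag + ciphertext)
end
def decrypt_credentials(encoded, key)
  raw = Base64.strict_decode64(encoded)
  iv, tag, ciphertext = raw[0, 12], raw[12, 16], raw[28..-1]
  cipher = OpenSSL::Cipher.new('aes-256-gcm')
  cipher.decrypt
  cipher.key = key
  cipher.iv = iv
  cipher.auth_tag = tag                         # decryption fails if the data was tampered with
  cipher.update(ciphertext) + cipher.final
end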
