Shopping cart implementation - ruby-on-rails

I want to integrate a shopping cart into my site. The cart should reset once the user signs out of the application. This can be achieved either via sessions or via database tables.
Which of the two should be preferred? Are there any security loopholes if this is handled via sessions?

In the security department, neither of the two is preferred over the other. You should understand that both concepts are basically "sessions"; one is handled in the appdomain, the other in the DB domain.
Appdomain sessions:
Faster (No round-tripping to database)
Not scalable
Prone to concurrency problems on server farms
Sessions will be lost on server restart
Database sessions:
Slower (round trips to the DB on each request)
Easier to scale on server farms
Sessions are kept across server restarts
You should consider how many users will be using your site. If you are expecting a lot, you are probably going to need multiple servers, in which case database sessions will be your best bet; if you stay with a single web server / database server, appdomain sessions will do fine.
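In Rails terms those two options map roughly onto the built-in session stores. A minimal sketch, assuming a reasonably current Rails app (the cookie key name is made up, and the database-backed store lives in the activerecord-session_store gem):

# config/initializers/session_store.rb
# "Appdomain" sessions: the default cookie store keeps the whole session in a
# signed/encrypted cookie on the client, so nothing ever hits the database.
Rails.application.config.session_store :cookie_store, key: "_myapp_session"

# Database sessions: the activerecord-session_store gem keeps each session in a
# table and leaves only the session id in the cookie.
# Rails.application.config.session_store :active_record_store, key: "_myapp_session"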

I don't see why HttpSessions increase your security exposure - if your session is hijacked then presumably so is your DB access.
If you really intend that your user's cart should be transient, then clearly your HttpSession is sufficient. App servers that scale out usually have session replication capabilities to deal with individual server failures.
I'm sceptical that such a volatile cart will always be what you want in the long term; I find it very convenient to browse around Amazon, assemble my cart, and then just leave it for a while. As it's probably not a great deal more work to persist your cart in a DB, I'd probably go for that.
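If you do go the database route, the cart itself boils down to a couple of small models. A rough sketch with hypothetical Cart and CartItem models (none of these names come from the question):

# app/models/cart.rb
class Cart < ApplicationRecord
  belongs_to :user
  has_many :cart_items, dependent: :destroy

  def add_product(product, quantity = 1)
    item = cart_items.find_or_initialize_by(product_id: product.id)
    item.quantity = item.quantity.to_i + quantity
    item.save!
  end
end

Because the cart is just a row rather than session data, it survives sign-out by default; to match the original requirement you would destroy it explicitly in the sign-out action.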

I would use Sessions - there is no point in clogging up your DB with data that will be destroyed on log out.
Plus, Sessions are quite safe to use.
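For the session-based variant, a minimal sketch (the controller, params and route helper are assumptions, not existing code): keep only product ids and quantities in the session, and signing out resets the cart for free.

# app/controllers/cart_controller.rb
class CartController < ApplicationController
  def add
    # session[:cart] is a plain hash of product_id => quantity; it disappears
    # when the session is reset at sign-out, which is exactly the behavior asked for.
    session[:cart] ||= {}
    session[:cart][params[:product_id]] = session[:cart][params[:product_id]].to_i + 1
    redirect_to cart_path
  end
end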

Related

Is there a canonical pattern for caching something related to a session on the server?

In my Rails app, once per user session, I need to have my server send a request to one of our other services to get some data about the user. I only want to make this request once per session because pinging another service every time the user makes a request will significantly slow down our response time. However, I can't store this information in a cookie client-side. This information has some security implications - if the user has the ability to lie to our server about what this piece of information is, they can gain access to data they're not authorized to see.
So what is the best way to cache or store a piece of data associated with a session on the Rails server?
I'm considering using Rails' low-level caching, and I think it might even be correct:
Rails.cache.fetch(session.id, expires_in: 12.hours) do
  OtherServiceAPI.get_sensitive_data(user.id)
end
I know that Rails often has one canonical way of doing things, though, so I want to be sure there's not a built-in, officially preferred way to associate a piece of data with a session. This question makes it look like there are potential pitfalls using the approach I'm considering as well, although it looks like those concerns may have been made obsolete in newer versions of Rails.
Is there a canonical pattern for what I'm trying to do? Or is the approach I'm considering idiomatic enough?
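For what it's worth, if the app already uses a server-side session store, one alternative worth considering (a sketch only, with an assumed current_user helper, not an officially blessed pattern) is to memoize the value in the session itself rather than keying Rails.cache by session id:

def sensitive_data
  # Only sensible with a server-side session store (cache, ActiveRecord, etc.);
  # with the default cookie store the value would be serialized into the cookie
  # and sent to the client on every request.
  session[:sensitive_data] ||= OtherServiceAPI.get_sensitive_data(current_user.id)
end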

How to properly handle asynchronous database replication?

I'm considering using Amazon RDS with read replicas to scale our database.
Some of our controllers in our web application are read/write, some of them are read-only. We already have an automated way for identifying which controllers are read-only, so my first approach would have been to open a connection to the master when requesting a read/write controller, else open a connection to a read replica when requesting a read-only controller.
In theory, that sounds good. But then I stumbled upon the replication lag concept, which basically says that a replica can be several seconds behind the master.
Let's imagine the following use case then:
The browser posts to /create-account, which is read/write, thus connecting to the master
The account is created, transaction committed, and the browser gets redirected to /member-area
The browser opens /member-area, which is read-only, thus connecting to a replica. If the replica is even slightly behind the master, the user account might not exist yet on the replica, thus resulting in an error.
How do you realistically use read replicas in your application, to avoid these potential issues?
I worked with an application that used pseudo-vertical partitioning. Since only a handful of the data was time-sensitive, the application usually fetched from the slaves and only went to the master in selected cases.
As an example: when the user updated their password, the application would always ask the master during authentication. When changing non-time-sensitive data (like user preferences), it would display a success dialog along with a note that it might take a while until everything is updated.
Some other ideas which might or might not work depending on environment:
After an update, compute an entity checksum, store it in the application cache, and when fetching the data always check it against that checksum (a sketch follows below this answer)
Use browser storage/cookies to store a delta, ensuring the user always sees the latest version
Add an "up-to-date" flag and invalidate it synchronously on every slave node before/after an update
Whatever solution you choose, keep in mind it's subject to the CAP theorem.
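A rough illustration of the checksum idea from the list above; ReplicaDatabase, MasterDatabase and the cache key are hypothetical names rather than a real API:

require "digest"

def fetch_user_preferences(user_id)
  # The writing side stores a checksum of the row in the app cache after every update.
  expected = Rails.cache.read("user_prefs_checksum/#{user_id}")
  prefs = ReplicaDatabase.find_preferences(user_id)   # normal reads go to a replica

  if expected && checksum(prefs) != expected
    # The replica has not caught up with the last write yet; fall back to the master.
    prefs = MasterDatabase.find_preferences(user_id)
  end
  prefs
end

def checksum(record)
  Digest::SHA1.hexdigest(record.attributes.sort.to_s)
end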
This is a hard problem, and there are lots of potential solutions. One option is to look at what Facebook did.
TL;DR - read requests get routed to the read-only copy, but if you do a write, then for the next 20 seconds all your reads go to the writable master.
The other main problem we had to address was that only our master
databases in California could accept write operations. This fact meant
we needed to avoid serving pages that did database writes from
Virginia because each one would have to cross the country to our
master databases in California. Fortunately, our most frequently
accessed pages (home page, profiles, photo pages) don't do any writes
under normal operation. The problem thus boiled down to, when a user
makes a request for a page, how do we decide if it is "safe" to send
to Virginia or if it must be routed to California?
This question turned out to have a relatively straightforward answer.
One of the first servers a user request to Facebook hits is called a
load balancer; this machine's primary responsibility is picking a web
server to handle the request but it also serves a number of other
purposes: protecting against denial of service attacks and
multiplexing user connections to name a few. This load balancer has
the capability to run in Layer 7 mode where it can examine the URI a
user is requesting and make routing decisions based on that
information. This feature meant it was easy to tell the load balancer
about our "safe" pages and it could decide whether to send the request
to Virginia or California based on the page name and the user's
location.
There is another wrinkle to this problem, however. Let's say you go to
editprofile.php to change your hometown. This page isn't marked as
safe so it gets routed to California and you make the change. Then you
go to view your profile and, since it is a safe page, we send you to
Virginia. Because of the replication lag we mentioned earlier,
however, you might not see the change you just made! This experience
is very confusing for a user and also leads to double posting. We got
around this concern by setting a cookie in your browser with the
current time whenever you write something to our databases. The load
balancer also looks for that cookie and, if it notices that you wrote
something within 20 seconds, will unconditionally send you to
California. Then when 20 seconds have passed and we're certain the
data has replicated to Virginia, we'll allow you to go back for safe
pages.
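Translated into Rails terms, the cookie trick looks roughly like the sketch below. This is not Facebook's code; the role-based connection switching (ActiveRecord::Base.connected_to, Rails 6+) is real, but everything else, including the cookie name, is assumed:

# app/controllers/application_controller.rb - sketch only
class ApplicationController < ActionController::Base
  REPLICATION_GRACE_PERIOD = 20.seconds

  around_action :pin_reads_to_primary_after_write

  private

  def pin_reads_to_primary_after_write
    role = if !request.get? || recently_wrote?
             :writing   # writes, and reads shortly after a write, hit the primary
           else
             :reading   # everything else may go to a replica
           end
    ActiveRecord::Base.connected_to(role: role) { yield }
    # Remember the last write so the next requests stick to the primary for a while.
    cookies[:last_write_at] = Time.current.to_i unless request.get?
  end

  def recently_wrote?
    cookies[:last_write_at].to_i > REPLICATION_GRACE_PERIOD.ago.to_i
  end
end

Recent Rails versions ship a middleware (ActiveRecord::Middleware::DatabaseSelector) that implements essentially this write-then-pin behavior out of the box, with a configurable delay.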

Performance of ActiveRecord SessionStore

How big is the performance penalty when switching from the CookieStore to the ActiveRecord SessionStore?
By default Ruby on Rails uses the CookieStore. But it has the disadvantage that the client needs to have its cookies enabled.
Switching to the ActiveRecord SessionStore seems to solve that problem. I'm considering switching.
I read that performance is worse with the ActiveRecord SessionStore. But what exactly is worse? Will a user notice it, or is it a matter of milliseconds? Has anybody seen benchmark results comparing the two options?
Any other reasons (not) to switch to the ActiveRecord SessionStore?
What is worse is that it needs to query a database, which then has to compute the answer, rather than going straight to the cookie on the client side.
However, is it really that bad? You are correct in that the performance difference is minuscule in most cases.
Pros:
Affinity - If your web application ever expands to more than one server, moving your sessions to a database allows you to run your servers without server affinity.
Security - Since you only store the session ID on the client side, this reduces the chances of the user manipulating any data via the client side.
Cons:
Performance - Instead of just reading the session data from the cookie on the client side, every request has to query the database.
But the AR session store also depends on cookies - it saves the session id there.
As far as I know there is no way to make Rails sessions work with cookies disabled.
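For reference, the switch itself is only a few lines on a reasonably current Rails app; the store now lives in the activerecord-session_store gem, and the session key name below is made up:

# Gemfile
gem "activerecord-session_store"

# then create the sessions table:
#   rails generate active_record:session_migration
#   rails db:migrate

# config/initializers/session_store.rb
Rails.application.config.session_store :active_record_store, key: "_myapp_session"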

How to configure login when using multiple servers running a distributed service (HAProxy, Apache, Ruby on Rails)

I have 3 servers running a website. I now need to implement a login system, and I am having problems with it as the user gets different behavior (logged in or logged out) depending on which server they connect to.
I am using Memcache for session store in Rails -
config.action_controller.session_store = :mem_cache_store
ActiveSupport::Cache::MemCacheStore.new("server1","server2","server3")
I thought the second line would keep the caches in sync, or something like that...
Each server has its own DB with 1 master and 2 slaves. I have tried going the route of storing sessions in SQL, but that really hurts the SQL servers and the replication load becomes really heavy.
Is there an easy way to say: use this Memcache for session storage on all 3 servers?
Will that solve my problem?
I would really appreciate it.
I haven't used memcached to store sessions before (I feel like Redis is a better solution), but I think as long as you have the
ActiveSupport::Cache::MemCacheStore.new("server1","server2","server3")
line on each of your application servers, your sessions should stay synced up.
I've had a lot of success with just using regular cookie sessions using the same setup you've described.
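In a current Rails app the equivalent setup would look something like the sketch below (dalli gem assumed, hostnames are placeholders); the important part is that all three app servers point at the same memcached pool rather than each at its own:

# config/environments/production.rb
# one shared memcached pool used by every app server
config.cache_store = :mem_cache_store, "cache1.example.com", "cache2.example.com"

# config/initializers/session_store.rb
# keep sessions in the Rails cache above, so any server can read a session
# written by any other server
Rails.application.config.session_store :cache_store,
  key: "_myapp_session", expire_after: 2.hours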

High Availability ASP.NET MVC

When building an ASP.NET MVC application with a goal of high availability, is it good practice to keep the session state in SQL Server if there is no state server available?
The point here really is that you have 2-3 web servers, as you mentioned in the comment to Craig's answer.
One way is to use SQL Server session state, which has its own problems: http://idunno.org/articles/277.aspx
If you have just that one SQL Server I would be careful, because the session state DB will put heavy load on it; each request writes to the DB.
We use 2 web servers and a load balancer with sticky sessions. If your first request ends up on server 1, then all your requests are handled by server 1. (It's a bit more sophisticated than that, but you get the idea.)
This might not always be the best solution, but at least on our site (it's a shop where users typically stay 20-30 minutes) it works well. We use only a little SessionState and have most of the user-specific stuff stored by the ProfileSystem. But I guess the ProfileSystem will also fail if requests go to different servers.
I'd suggest AppFabric Caching (f.k.a. Velocity) instead.
