Evaluating AppFabric and have some newbie questions like why does setting up AppFabric require a database or xml? - asp.net-mvc

I'm new to AppFabric and evaluating distributed cache solutions for a production environment. We're a Microsoft shop using ASP.NET MVC and Web API, and we are not using Windows Azure.
While setting up AppFabric on my local computer, there was a step to create a database or use an XML file, and I'd like to understand the concept here. Does AppFabric depend on a data source (a database or XML files) for persistence? If so, wouldn't that be a potential bottleneck?
Also, could anyone who is using AppFabric right now on their production servers comment on their experience using it? Any pitfalls or gotchas?
Thanks, really appreciate it!

I can give you my experience from a little over a year ago after deploying AppFabric in a production environment. I haven't kept up with it since because my experience wasn't great. Maybe they've fixed some of the issues we had.
The step to create a database or an XML file is just there to store configuration information for the cache cluster.
My notes (remember this was current a year ago, so maybe things have changed):
When caching an object from C#, the object was serialized to XML and stored. That's a verbose format, and it made puts and gets a little slower than they should have been. I would rather have had the object serialized to a binary format, or compressed, or just anything other than uncompressed XML. We actually modified our objects to use shorter property names when cached, since those names ended up in the XML (see the sketch after these notes). That took some objects from 1 MB down to a couple hundred kilobytes.
AppFabric communicates over the NetTcp protocol on Windows, and this caused us some grief. Some of our servers didn't have the relevant Windows service (NetTcp) installed, and for a while we couldn't figure out why AppFabric worked on one machine and not another.
It did seem to do well with distributing the load across multiple machines in the cluster. It also seemed to retrieve stuff fast and the expiration logic always worked fine.
At the time it was just a fairly immature product. We couldn't find any support anywhere for it. I remember being on the phone with Microsoft trying to track down some issues we had and every time the person would say, "AppFabric? What is that?" The community around it at the time was non-existent. (This was really painful for us.)
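To illustrate the first note above, here's a minimal sketch of the kind of workaround described there: shortening the names that end up in the serialized payload. The type, member, key, and cache names are illustrative (not the real model), and it assumes the standard AppFabric client from Microsoft.ApplicationServer.Caching with its default serializer:

using System.Runtime.Serialization;
using Microsoft.ApplicationServer.Caching;

// Short DataContract/DataMember names shrink the XML the default serializer produces.
// All names here are illustrative, not from the original application.
[DataContract(Name = "p", Namespace = "")]
public class Product
{
    [DataMember(Name = "i")] public int Id { get; set; }
    [DataMember(Name = "n")] public string Name { get; set; }
    [DataMember(Name = "d")] public string Description { get; set; }
}

public static class CacheSketch
{
    public static void Demo()
    {
        // Reads client configuration (hosts, ports) from the dataCacheClient config section.
        DataCache cache = new DataCacheFactory().GetCache("default");
        cache.Put("product:42", new Product { Id = 42, Name = "Widget" });
        var fromCache = (Product)cache.Get("product:42");
    }
}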
If I had to build a Windows application that needed a distributed cache, I would have to re-evaluate AppFabric, since my first experience wasn't the greatest. Today I'd think of Redis, Couchbase, Memcached - in that order.

Related

Rails turn feature on/off on the fly

I am a newbie to Rails. I used feature flags when I was in the Java world, and I've found that there are a few gems in Rails (rollout and others) for doing this. But how do I turn a feature on or off on the fly in Rails?
In Java we can use an MBean to toggle features on the fly. Any ideas or pointers on how to do this? I don't want to restart my servers once the code is deployed.
Unless you have a way of communicating with all your processes at once, which is non-standard, you'd need some kind of centralized configuration system. Redis is a really fast key-value store that works well for this, but a database can also do the job if a few milliseconds per page load to figure out which features to enable isn't a big deal.
If you're only deploying on a single server, you could also use a static YAML or JSON configuration file that's read before each request is processed. The overhead of this is almost immeasurable.

Multiple redmine instances best practices

I'm studying the best way to run multiple Redmine instances on the same server (basically I need a separate database for each Redmine group).
So far I have two options:
Deploy a Redmine instance for each group
Deploy one Redmine instance with multiple databases
I really don't know what the best practice is in this situation; I've seen people doing it both ways.
I've tested deploying multiple Redmines (3 instances) with nginx and Passenger. It worked well, but I think it may not be feasible with a lot of instances. Each app needs around 100 MB of RAM, and as requests increase, more processes tend to be allocated to each app. This scenario looks bad if we had a lot of instances.
Option 2 seems reasonable; I think I can implement it with Rails environments. But I suspect there are some security problems related to sessions (I think a user of site A would be allowed to perform actions on site B after authenticating on A).
Is there any good practice for this situation? What's the best approach to take?
Another requirement related to this: we must be able to create or shut down a Redmine instance without interrupting the others (e.g. we should avoid server restarts).
Thanks for any advice, and sorry for my English!
Edit:
My solution:
I used a Redmine instance for each group, managed with nginx + Unicorn so I could run each instance independently (Passenger didn't let me do that).
The two options are not so different after all. The only difference is that in option 2, you only have one copy of the code on your disk.
In any case, you still need to run separate worker processes for each instance, as Redmine (and most Rails apps in general) doesn't support switching databases per request, and some environment-specific data is cached in-process.
Given that, there is not really much incentive to share even the codebase as it would require certain monkey patches and symlink-magic to allow the proper initialization for the intentional configuration differences (database and email configuration, paths to uploaded files, ...). The Debian package does that but it's (in my eyes) rather brittle and leads to a rather non-standard system.
But to stress again: even if you share the same code on the disk between instances, you can't share the running worker processes.
Running multiple instances from the same codebase is not officially supported by Redmine. However, the Debian/Ubuntu packages seem to support such an approach... See:
Multiple instances of redmine on Debian squeeze
So, generally:
If you use Debian/Ubuntu go with option #2
Otherwise go with #1
Rolling forward a couple of years, you might now want to consider a third option: using Docker containers for each of your Redmine instances.
I've been using https://github.com/sameersbn/docker-redmine.git, and have been quite happy with it, except that it doesn't yet support handling incoming mail for creating and commenting on tickets.

issues with deploying rails servers to multiple hosts

I've often heard that deploying a traditional monolithic Rails app (i.e. no internal web API, no message queue, no Redis/memcached server) to multiple servers can produce a bunch of bugs that are very hard to debug, but I'm having a hard time coming up with concrete examples despite a few hours of googling.
Some obvious issues that I can think of are:
Observers - likely will not work properly as the observation is only propagated on one server and not all of them (assuming there is no Message Queue)
Sessions - these would probably need to be stored in the database, which would need its own host
Caches - any sweepers would have issues propagating invalidations between servers.
Anyone else care to contribute? I'd really appreciate any articles others may have come across or just general wisdom :)
Observers are just code callbacks; they run in each process, on each server.
Sessions have defaulted to the cookie store for the last few years, so multiple servers are no problem. If you don't have enough space in your cookie, then I'd suggest you may be doing something wrong.
Cache invalidation is indeed a problem, but it always is. One solution is to break your cache out into a standalone service; sites like Facebook run giant farms of memcached servers.
I think scaling and clustering is always a hard problem, but this seems to be an old argument against Rails. If anything, the last few years have seen Rails shine in this respect, with EC2, NoSQL, and server automation becoming the norm in the community.

MVC3 site running fast locally, very slow on live host

So I've been running into some speed issues with my site which has been online for a few weeks now. It's an MVC3 site using MySQL on discountasp.net.
I cleaned up the structure of the site and got it working pretty fast on my local machine, around 800-1100ms to load with no caching. The strange thing is when I try and visit the live site I get times of around 15-16 seconds, sometimes freezing up as long as 30 seconds. I switched off the viewstate in web.config and now the local loads in 1.3 seconds (yes, oddly a little longer) and the live site is down to 8-9 seconds most of the time, but that's still pretty poor.
Without making this problem too specific to my case (since there can be a million reasons sites go slow), I am curious whether there are any reasons why load times on the local Visual Studio server or IIS Express would be so fast while the live site runs so slow. Wouldn't anything code-wise or dependency-wise affect both equally? I just can't think of a reason that would affect the live site but not the local one.
Any thoughts?
Further thoughts: I have the site set up as a sub-folder which I'm mapping to a subdomain using IIS URL Rewriting. I've not heard of this causing issues before, but could this be a problem?
Further Further Updates: So I uploaded a simple page that does nothing but query all the records in the largest table I have, with no caching. On my local machine it averages around 110ms (which still seems slow...), and on the live site it's usually more than double that. If I'm hitting the database several times to load the page, it makes sense that this would heavily affect the page load time. I'm still not sure if the issue is with LINQ or MySQL or MVC in general (maybe even discountasp.net).
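For reference, a rough sketch of the kind of timing page that separates the database round trip from row materialization; the context and entity names are made up, assuming a LINQ provider (e.g. Entity Framework over the MySQL connector) inside a throwaway controller:

public ActionResult TimingTest()
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    using (var db = new MyDbContext())                 // hypothetical LINQ/EF context
    {
        var count = db.Records.Count();                // one round trip, no rows transferred
        long queryMs = sw.ElapsedMilliseconds;

        var all = db.Records.ToList();                 // transfers and materializes every row
        long loadMs = sw.ElapsedMilliseconds - queryMs;

        return Content(string.Format("COUNT(*): {0} ms, full load of {1} rows: {2} ms",
            queryMs, all.Count, loadMs));
    }
}

If the COUNT(*) round trip alone is already slow on the live host, the problem is more likely network latency between the web server and the MySQL server than LINQ or MVC itself.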
I had a similar problem once, and the culprit was the initialization of the user session. It turned out a lot of objects were being read from and written to session state on each request, but for some reason this wasn't affecting my local machine (I probably had InProc mode enabled locally).
So try adding an attribute to some of your controllers and see if that speeds things up:
[SessionState(SessionStateBehavior.Disabled)]
public class MyController : Controller
{
    // actions in here never touch Session, so the session load/lock is skipped entirely
}
On another note, I ran some tests, and surprisingly, it was faster to read some of those objects from the DB on each request than to read them once, then put them in the session state. That kinda makes sense, since session state mode in production was SqlServer, and serialization/deserialization was apparently slower than just assigning values to properties from a DataReader. Plus, changing that had the nice side-effect of avoiding deserialization errors when deploying a new version of the assembly...
By the way, even 992ms is too much, IMHO. Can you use output caching to shave that off a bit?
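For instance, a minimal output-caching sketch (the duration and vary-by values are placeholders to tune, and LoadDashboard is a stand-in for whatever expensive work the action does):

// Caches the rendered HTML of this action for 60 seconds per distinct query string,
// so the expensive work only runs when the cached copy expires.
[OutputCache(Duration = 60, VaryByParam = "*")]
public ActionResult Index()
{
    var model = LoadDashboard();   // hypothetical expensive call
    return View(model);
}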
So as I mentioned above, I had caching turned off for development, but only on my local machine. What I didn't realise was there was a problem WITH the caching which was turned on for the LIVE server, which I never turned off because I thought it was helping fix the slow speeds! It all makes sense now :)
After fixing my cache issue (an IQueryable<> at the top of a dataset that was supposed to cache the entire table.. >_>), my speeds increased tenfold.
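That failure mode is easy to hit because an IQueryable<> is lazily evaluated: "caching" one only caches the query definition, so every reader re-executes it against MySQL. A minimal sketch of the difference, with illustrative names (db, Product, cache key):

// BAD: stores only the query definition; each consumer re-runs the SQL.
IQueryable<Product> lazy = db.Products;          // db is a hypothetical LINQ data context
HttpRuntime.Cache.Insert("products", lazy);

// BETTER: materialize the rows once, then cache the resulting list.
List<Product> rows = db.Products.ToList();
HttpRuntime.Cache.Insert("products", rows, null,
    DateTime.UtcNow.AddMinutes(10),
    System.Web.Caching.Cache.NoSlidingExpiration);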
Thanks to everyone who assisted!

Should I choose cloud?

I'm about to start development on a project with very uncertain load/traffic specifics. When it's released there will certainly be very low load that can easily be handled by a single quad-core desktop machine.
The problem is that (after some invite-only period) there will be a strong publicity push for the product, so I expect considerable traffic/load peaks.
I haven't read enough about cloud providers, and I'm mostly leaning toward Amazon or Azure because of the credibility these two companies have, without checking out others as I should (e.g. Rackspace, which I suppose is also a cloud service provider).
What I want
I would like to create a normal ASP.NET MVC web application that can run on a single low-cost in-house server. It would run the web server along with the database (relational, and maybe also a document store) and full-text search (not SQL FTS, but rather a high-speed separate product like Lucene or Sphinx). After the initial invite-only period, I'd like to move this app to the cloud to make it better able to handle traffic/load demand.
As far as I know, Amazon offers a sort of virtual machine hosting which I understand you set up like a normal server, but with flexible resources in terms of capacity under load. I'm not sure whether the same can be accomplished on Azure.
Questions
What is your experience with transitioning an application to the cloud, and which provider did you choose and why?
What would you recommend I think about when designing/developing the solution to make the transition as painless as possible?
Based on your experience, is it better financially to move to the cloud, or is it better to buy your own servers, load-balance the application yourself, and maybe save money in the long run?
"Cloud" is such a vague term. Still, I think this is a very good question.
Basically, IaaS cloud hosting does not magically make your application scale. It's really a virtual private server with very short contract / cancellation periods.
For scalability, the magic lies not so much in the hosting as in the horizontal scalability of the application code itself. This is related to all the usual distributed-computing challenges. For example, adding more application servers is not always easy: you must be sure that you don't persist any user state in the server application (but rather in a database; statics can be evil), and caching can be problematic because local caches can make the situation worse if you're using a round-robin strategy, etc.
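As an illustration of the "statics can be evil" point, a minimal sketch (the types and store are hypothetical, not from the original answer):

using System.Collections.Generic;

// BAD on a load-balanced farm: every server keeps its own copy of this dictionary,
// so a user routed to a different server appears to have lost their state.
public static class VisitState
{
    public static readonly Dictionary<string, int> ItemsInCart =
        new Dictionary<string, int>();
}

// BETTER: keep per-user state in something shared (database, distributed cache,
// or the cookie itself) so any server in the farm can handle any request.
public interface ISharedStateStore
{
    int? GetItemsInCart(string userId);
    void SetItemsInCart(string userId, int count);
}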
What is your experience with transitioning an application to the cloud, and which provider did you choose and why?
What would you recommend I think about when designing/developing the solution to make the transition as painless as possible?
You don't really have to do anything different just to host on EC2 or Azure -- basically. But of course, it's not that easy when things grow.
For instance, EC2 instance storage is rather limited. Additional storage on EBS, however, does not provide comparable performance characteristics and can be a bit more laggy than a local disk. The point here is that EBS does magically scale, and it's probably more PaaS than IaaS; but it's not a simple hard disk, and consequently it does not behave like one. I don't know about Azure block storage. In general, expect additional abstraction layers to introduce problems of their own, no matter what they do.
Based on your experience, is it better financially to move to the cloud, or is it better to buy your own servers, load-balance the application yourself, and maybe save money in the long run?
Typical cloud providers are more expensive than the usual round-the-corner VPS providers, but in my experience they are also much more reliable and professional. EC2 has a free tier (but it's quite small), and Azure gives you a small instance for free for 3 months.
Doing the calculation right is rather tricky; for example, if you have to shut down your service for whatever reason, it's nice to be able to cancel now rather than pay another year - you might want to put this risk into your calculation. On the other hand, both EC2 and Azure will be considerably cheaper if you sign up for 6 or 12 months, rather than paying by the hour.
You might want to check out the free Azure plan, because it's nice to start fiddling around without any cost. A big advantage of cloud providers is that you can scale vertically very easily: buying a 16 core, 64GB RAM server machine is really expensive, but if there's so much traffic on your site, upgrading your plan won't be such a big issue.
As no one has mentioned it yet...
AppHarbor has been amazing. You can push stuff in a matter of minutes. Deployment is a breeze. And setting up your project for it is easier as well. And it doesn't even require any major changes in your solution to fit in.
For the full-text search, you might consider something like Websolr.
A lot of this depends on what your app is doing (e.g., are there separable components that might benefit from running on different instances, vs. a simple CRUD application with a front end). One thing to consider is that in a cloud application you normally don't have a traditional relational database. As such, you have to choose either cloud or traditional hosting, or plan on coding your access layer twice. Azure does have relational databases (SQL Azure), although they're not identical to SQL Server 2008R2. You're going to have to research the pros/cons of a cloud setup for your specific situation.
As far as financial concerns go, it's usually a lot cheaper to just get an account with a hosting company instead of a cloud service, since you pay by the month instead of the hour (last time I checked, an account with Azure running 24/7 for a month would cost about $40-$50, while you can get hosting for $15 a month). The savings with the cloud come in when you have to run several servers and the cost of maintaining them surpasses the cost of the instances on the cloud platform.
So, sorry, there's no silver bullet answer for you. Read up on the different services available. Consider what your application needs, what prices will be, and go from there.
I have just migrated an MVC-based application from a dedicated server to Azure. When migrating the MSSQL database, I first tried importing .bacpac files, but some of the tables failed because of their size. I then used the SQL Database migration wizard, which worked fine for small tables but failed for tables with BLOB fields. For those tables I had to use temporary intermediate tables. After all the data was transferred, setting up the web app was a breeze and we went into production. At first everything seemed to work just fine, but after a couple of hours, when the load got heavier, all kinds of errors occurred. I went into the Azure portal and it was really easy to see the
