How to configure a web garden for testing purposes

I configured my web app to use a web garden by setting Maximum Worker Processes to 3.
So now when I start my application, three worker processes are fired up in IIS 7, but all the requests just go to one worker process.
My question is: is there a way to force IIS to use a different worker process for each request? I want to test that things like sessions, static objects, etc. work in a web garden scenario.
Thanks,
Daljit

When you set up a web garden, IIS automatically handles load balancing of requests. It may appear that all the requests go to a single worker process, but that is not true. The first few requests will probably go to only one process, but if you generate enough requests, IIS will distribute the load to the other worker processes.
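For reference, a hedged sketch of the relevant appcmd commands; the pool name "MyAppPool" is a placeholder for your own application pool:

    rem Set the web garden size (Maximum Worker Processes) to 3:
    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.maxProcesses:3

    rem List the running worker processes (process ID and owning app pool),
    rem which lets you watch the load spread as you generate more requests:
    %windir%\system32\inetsrv\appcmd.exe list wp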

IIS Web Farm AppPool warm-up

I have multiple servers (2012 R2 with IIS 8.5) that have shared configuration, shared vanity URL (f5 load balanced), and host several different applications. One of the applications (an ASP.NET MVC web app) is rarely used (maybe once or twice a week) but when it needs to be used, it needs to load quickly.
I've set the AppPool to have a Start Mode of "AlwaysRunning" and a Recycling -> Regular Time Interval of 0, but it seems like every time I hit the app it takes forever to load (10-20 seconds), while subsequent page requests happen instantly.
Is there another setting that I need to set to keep the app warmed up? The app uses Kerberos authentication and access is limited to one security group (that I'm not even a member of), so I can't use external PowerShell scripts to manually keep it warm.
You can check whether the application pool is running before you hit the app.
If you click on your server name in IIS and then click on "Worker Processes", you'll see the process IDs of the different application pools and their state.
This way you can confirm the app pool is running before you access the application, which will help you narrow down where the problem exists.
1) Is the app pool running?
2) Is my app loaded in my app pool?
If 1 checks out, then move on to step 2 and check whether the libraries of that application are loaded into that process ID.
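If you prefer the command line, a rough PowerShell equivalent of those checks (this assumes the WebAdministration module is available; "MyAppPool" is a placeholder):

    Import-Module WebAdministration

    # Step 1: is the app pool started?
    Get-WebAppPoolState -Name "MyAppPool"

    # Step 2: are there w3wp.exe worker processes running to host it?
    Get-Process -Name w3wp -ErrorAction SilentlyContinue | Select-Object Id, StartTime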
Check your event log for application pool failures.
If you have some asynchronous initialisation or maintenance task that is started in parallel with the request (or with some delay) and subsequently fails, it can make that request (and some afterward) succeed but kill the application pool shortly after. That would exhibit exactly these symptoms.
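One hedged way to pull the relevant entries with PowerShell (app pool crashes and rapid-fail protection events are typically written to the System log by WAS, though source names can vary by system):

    Get-EventLog -LogName System -Source WAS -Newest 20 |
        Format-List TimeGenerated, EntryType, Message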

Ruby on Rails on a few servers

I have a big application. One part of it is high-load processing of user files. I have decided to dedicate one server to this. It will run nginx for serving content and some (non-Rails) programs for processing the files.
I have two questions:
What is better to use on this server? (Rails or something else, maybe Sinatra?)
If I use Rails, how do I deploy? I can't find any instructions. If I have one app and two servers, how do I deploy it and divide the tasks between them?
P.S. I need to authorize users on both servers. In Rails I use Devise.
You can use Rails for this. If both servers will act as a web client to the end user, then you'll need some sort of load balancer in front of the two servers. HAProxy does a great job at this.
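For illustration, a minimal haproxy.cfg fragment with placeholder addresses (a complete config also needs global and defaults sections):

    frontend www
        bind *:80
        default_backend rails_servers

    backend rails_servers
        balance roundrobin
        # Placeholder addresses for the two application servers:
        server app1 10.0.0.1:80 check
        server app2 10.0.0.2:80 check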
As far as getting the two applications to communicate with each other, this will be less trivial than you may think. What you should do is use a locking mechanism when performing the tasks. Delayed_job by default will lock a job in the queue so that other workers will not try to work on the same job. You can use ActiveJob callbacks to notify the user via web sockets whenever their job is completed.
Anything that takes time or calls an external API should usually be placed into a background processing queue so that you're not holding up the user.
If you cannot spin up more than the two servers, you should make one of them the master or at least have some clear roles of the two servers. For example, one server may be your background processing and memcache server while the other is storing your database and handles your web sockets.
There are a lot of different ways of configuring the services and anything including and beyond what I've mentioned is opinionated.
Having separate servers for handling tasks is my preference as it makes them easier to manage from a Sys Admin perspective. For example, if we find that our web sockets server is hammered, we can simply spin up a few more web socket servers and throw them into a load balancer pool. The end user would not be negatively impacted from your networking changes. Whereas, if you have your servers performing dual roles outside of your standard Rails installation, you may find yourself cloning and wasting resources. Each of my web servers usually also perform background tasks on low-intermediate priority queues while a dedicated server is left for handling mission critical jobs.

How to perform work after the request in MVC

I have a long-running, CPU-bound task that I want to initiate from a link in my MVC application. When I click the link, I want the server to create a GUID to identify the job, return that GUID to the client, and perform the job after returning.
I set this up using ThreadPool.QueueUserWorkItem, but I've read this can be problematic in MVC. Is there a better option for this case? Is there a different approach I should be using?
In my experience it is better to perform long-running CPU tasks not in the ASP.NET application itself but in a separate application. For example, you can create a separate Windows service to process the tasks. To exchange data you can use, for example, a message queue, a database (probably the easiest way), or a web service.
This approach has the following advantages:
1) Integrity of the background job. IIS can be configured to recycle worker processes periodically; if your background job is running at that moment, it will be interrupted, which could be undesirable.
2) Server load planning. For example, you can move the processing service to a separate server, which frees up the web server and can provide a better end-user experience.
Take a look at this example to see how it can be implemented with Azure.
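To make the hand-off concrete, here is a rough sketch that uses a database table as the queue; the Jobs table, connection string name, and controller are illustrative assumptions, not part of the original question:

    using System;
    using System.Configuration;
    using System.Data.SqlClient;
    using System.Web.Mvc;

    public class JobsController : Controller
    {
        // Create the job id, record the job as pending, and return immediately.
        [HttpPost]
        public ActionResult Start()
        {
            var jobId = Guid.NewGuid();
            using (var conn = new SqlConnection(
                ConfigurationManager.ConnectionStrings["JobsDb"].ConnectionString))
            using (var cmd = new SqlCommand(
                "INSERT INTO Jobs (Id, Status, CreatedUtc) VALUES (@id, 'Pending', GETUTCDATE())",
                conn))
            {
                cmd.Parameters.AddWithValue("@id", jobId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }

            // The client polls a status endpoint with this id; a separate Windows
            // service polls the Jobs table and does the CPU-bound work outside IIS.
            return Json(new { jobId });
        }
    }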
You can do fire-and-forget by creating an asynchronous task without awaiting it, and it will run successfully most of the time, but due to IIS application lifecycle management those tasks may be abruptly cut off.
You can register an IRegisteredObject with the ASP.NET hosting environment, so that the object is notified when the application domain is being shut down.
Please take a look at this article:
http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx/
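A minimal sketch of the IRegisteredObject pattern described in that article; the class name is illustrative, and a real implementation would wait for in-flight work in Stop:

    using System;
    using System.Threading;
    using System.Web.Hosting;

    // Registers itself with the hosting environment so ASP.NET notifies it
    // before tearing down the application domain.
    public class BackgroundJobHost : IRegisteredObject
    {
        private int _running;

        public BackgroundJobHost()
        {
            HostingEnvironment.RegisterObject(this);
        }

        public void Run(Action work)
        {
            Interlocked.Increment(ref _running);
            ThreadPool.QueueUserWorkItem(_ =>
            {
                try { work(); }
                finally { Interlocked.Decrement(ref _running); }
            });
        }

        // Called by ASP.NET on shutdown; immediate == true means the shutdown
        // can no longer be delayed.
        public void Stop(bool immediate)
        {
            // A fuller implementation would wait here until _running reaches 0.
            HostingEnvironment.UnregisterObject(this);
        }
    }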

Rails: Can I run background jobs on a different server?

Is it possible to host the application on one server and queue jobs on another server?
Possible examples:
Two different EC2 instances, one with the main server and the second with the queueing service.
Host the app on Heroku and use an EC2 instance with the queueing service
Is that possible?
Thanks
Yes, definitely. We have delayed_job set up that way where I work.
There are a couple of requirements for it to work:
The servers have to have synced clocks. This is usually not a problem as long as the servers' timezones are all set the same.
The servers all have to access the same database.
To do it, you simply have the same application on both (or all, if more than two) servers, and start workers on whichever server you want to process jobs. Either server can still queue jobs, but only the one(s) with workers running will actually process them.
For example, we have one interface server, a db server, and several worker servers. The interface server serves the application via Apache/Passenger, connecting the Rails application to the db server. The workers have the same application, though Apache isn't running and you can't access the application through HTTP. They do, on the other hand, have delayed_job workers running. In a common scenario, the interface server queues up jobs in the db, and the worker servers process them.
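Concretely, a hedged sketch of that layout (hostnames, adapter, and worker counts are placeholders):

    # config/database.yml on every server points at the same database host:
    production:
      adapter: postgresql
      host: db.internal.example.com
      database: myapp_production

    # On the worker servers only, start the delayed_job workers:
    RAILS_ENV=production bin/delayed_job -n 2 start
    # (older setups: script/delayed_job start, or simply: rake jobs:work)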
One word of caution: if you're relying on physical files in your application (attachments, log files, downloaded XML, or anything else), you'll most likely need a solution like S3 to store those files, because the individual servers might not have them locally. For example, if a user uploads their profile picture via your web-facing server, the file would likely be stored on that server; if another server then needs to resize the profile pictures, the image wouldn't exist on that worker server.
Just to offer another viable option: you can use a new worker service like IronWorker that relies completely on an elastic farm of cloud servers inside EC2.
This way you can queue/schedule jobs to run and they will parallelize across tons of threads spanning multiple servers - all without worrying about the infrastructure.
Same deal with the database though - it needs to be accessible from the outside.
Full disclosure: I helped build IW.

Ruby on Rails: How would I handle 10 concurrent users? Do I need more CPU?

Sorry if this might seem obvious. I've observed that a web request to my Rails app uses 30-33% of the CPU every time. For example, if I load a web page, then 30% of the CPU is used. Does that mean that my box can only handle 3 concurrent web requests, and will stall if there are more than 3 web requests (i.e. I'll hit 100% CPU)?
If so, does that also mean that if I want to handle more than 3 concurrent web requests, then I'll have to get more servers to handle the load using a load balancer? (e.g. to handle 6 concurrent web requests, I'll need 2 servers; for 9 concurrent requests, I'll need 3 servers; for 12, I'll need 4 servers -- and so on?)
I think you should start with load tests; I wouldn't trust manual testing that much.
Load tests tell you how long the response takes for each client and how many clients simply time out.
You will also be able to measure improvements objectively for any changes you make.
Look at ab, or httperf; there are many other tools available.
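For example, a basic ab run against a placeholder URL, sending 1000 requests with 10 at a time, gives you requests per second and the latency distribution:

    ab -n 1000 -c 10 http://your-app.example.com/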
Stephan
Your Apache or Nginx in front of Passenger will queue requests until a Passenger worker becomes available. You can limit the number of concurrent workers so your server never stalls (but new visitors will have to wait longer until it's their turn).
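For instance, that limit is Passenger's pool size (the value 6 here is only an example):

    # Apache:
    PassengerMaxPoolSize 6

    # Nginx:
    passenger_max_pool_size 6;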
It's difficult to tell based on this information. It depends very much on the web server stack you're using and which environment you're running. Different servers (Mongrel, Webrick, Apache using various mechanisms, Unicorn) all have different memory characteristics. Different environments (development vs. test vs. production) all exhibit radically different memory usage characteristics.
