I'm currently using Workling with Starling on a rails app. Although I like Workling, I find it kinda hard to monitor.
To make matters worse, I have a couple of Workling instances. Workling is running with the :multiple set to 'true' (inside workling_client).
I can see the pid for each instance and such, but I want to know if they're actually doing some work, and find out if I need more (or even less) instances running.
Do you guys have any suggestions of tools, hacks or anything that could help me on this one?
I monitor Workling with monit. It gives you CPU% at any given point in time. If you want to see how much load they have had over time, you could use Munin instead; I believe it can give you graphs, via plugins, of a number of things about whatever you are monitoring. Sorry I can't be more specific, as I haven't used Munin.
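If you want to see whether the worklings are actually doing work, you can also watch the Starling queue depths directly. Starling speaks the memcached protocol, so its STATS output includes per-queue counters. A minimal sketch, assuming the default Starling port and the memcache-client gem (your queue names will differ):

require 'memcache'

starling = MemCache.new('localhost:22122')

# MemCache#stats returns { "host:port" => { stat_name => value, ... } }
starling.stats['localhost:22122'].each do |key, value|
  # Print the per-queue counters, e.g. queue_<name>_items
  puts "#{key}: #{value}" if key =~ /^queue_.*_items$/
end

If those counts keep growing, your worklings aren't keeping up and you need more instances; if they sit at zero, you can probably run fewer.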
I'm a bit overwhelmed by the sheer number of possible solutions the Rails community has created for my problem. So perhaps someone can help me figure out how best to solve it.
What I want to do is write a Rails app that behaves kind of like Dropbox. On the one hand, it should offer a web interface where I can upload and download files to my web server; this interacts with my database and all that stuff. On the other hand, I have SSH access to that server and can put files there manually. Now I want these file system actions to trigger my Rails app to do the same things it would do if I'd created the file via the web interface.
So I somehow need to write a daemon, right? There are a lot of solutions, like:
daemons.rubyforge.org/
github.com/mirasrael/daemons-rails
github.com/costan/daemonz
github.com/kennethkalmer/daemon-kit
Another feature I would like to have is that my Rails app automatically spawns and stops my daemon as I start or quit my Rails app, respectively. So "daemonz" seems like the best solution. But as I googled further I found
github.com/FooBarWidget/daemon_controller/
which seems a lot more "high tech" and is already in use, since I deploy with Passenger. But I don't understand whether it kills my daemons when I quit Rails. I suppose that is not the case, so I wonder how to implement this in my app.
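From its README, I gather you wrap your daemon from your own code rather than running a separate supervisor; roughly like this sketch (the commands and paths are just my guesses):

require 'daemon_controller'

# Wrap a (hypothetical) file-watcher daemon so the app can start it on demand
watcher = DaemonController.new(
  :identifier    => 'File watcher',
  :start_command => 'ruby script/file_watcher.rb',
  :ping_command  => lambda { File.exist?('tmp/pids/file_watcher.pid') },
  :pid_file      => 'tmp/pids/file_watcher.pid',
  :log_file      => 'log/file_watcher.log'
)

watcher.start # should be a no-op if the daemon is already running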
The way to implement a "thing" that reacts to file system changes seems straightforward to me. I'd use
github.com/guard/listen/
(an alternative would be github.com/ttilley/fssm)
But what I don't understand, since this is the first time I've really faced this kind of thing, is whether this spawns a server I'm able to communicate with, and what kind of object I have to deal with.
The last thing I would like to implement is a kind of worker queue, so that the listening for file system changes is separated from the actions of my Rails app. But there are so many solutions that I'm totally overwhelmed trying to pick one (see my delayed_job sketch after this list):
github.com/tobi/delayed_job/
github.com/defunkt/resque
http://backgroundrb.rubyforge.org/
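With delayed_job, for instance, I understand the listener would just enqueue a job object, something like this (the model is made up):

# Anything with a #perform method can be enqueued; delayed_job serializes it
class FileImportJob < Struct.new(:path)
  def perform
    UploadedFile.create_from_path(path) # hypothetical model call
  end
end

Delayed::Job.enqueue(FileImportJob.new('/srv/uploads/report.pdf'))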
And what is
http://godrb.com/
all about? How could that help me?
Does anyone have hints on how to solve this? Thanks a lot!
Jan
P.S. I'd like to post links to all the github projects but unfortunately I don't have enough 'reputation'
I'd definitely look into creating a process (daemon) that monitors the relevant directory. Then your Rails app can just put files into it without having to know anything about the back end, and it'll work with SSH too.
Your daemon can load the Rails environment & communicate with your database. I'd leave all the communication between them at that level.
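As a rough sketch of such a daemon, using the listen gem mentioned in the question (the paths and the UploadedFile model are placeholders):

require 'listen'

# Load the full Rails environment so the daemon can use the app's models;
# adjust the relative path to wherever this script lives
require File.expand_path('../config/environment', __FILE__)

listener = Listen.to('/srv/uploads') do |modified, added, removed|
  added.each do |path|
    UploadedFile.create_from_path(path) # do whatever the web upload flow does
  end
end

listener.start # non-blocking in recent versions of listen
sleep          # keep the daemon process alive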
As for making it start/stop with your Rails app... are you sure? I use god (the Ruby gem) to start/monitor processes. It will "daemonize" your Ruby app for you, too. If you want, you can actually tell god to stop your directory-monitor process and then exit when Rails stops. And you can fire off god from a Rails initializer.
However, if you might find yourself using SSH or some other means to put files into that directory when rails is not running, you might look into putting a script into /etc/init.d to automatically start god when the server boots up.
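For illustration, a minimal god config for that directory-monitor process might look like this (the name, command, and paths are placeholders):

# file_watcher.god -- load with: god -c file_watcher.god
God.watch do |w|
  w.name  = 'file-watcher'
  w.start = 'ruby /path/to/app/script/file_watcher.rb'
  w.log   = '/path/to/app/log/file_watcher.log'
  w.keepalive # restart the process if it dies
end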
HTH
I think you want something like Guard for monitoring the changes on the filesystem and performing actions when changes occur.
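A Guardfile for that can be tiny; here is a sketch using the guard-shell plugin (the watched path and the command are assumptions):

# Guardfile -- run with: guard start
guard :shell do
  watch(%r{^uploads/.+}) do |m|
    # m[0] is the path that changed; hand it off however your app expects
    `script/process_upload #{m[0]}`
  end
end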
As for god, you should definitely look into it. It will make starting/stopping the processes you depend on considerably easier. We used Bluepill for a while, but there were so many bugs that we ditched it and moved to god, which IMHO is a lot more pleasant to work with, for the most part.
Have you tried creating a script file, e.g. startDaemon.rb, and then placing it in config/initializers/?
I have recently installed Nginx + Thin on my deployment server, but I am not sure how this will perform under a heavy request/response load, let's say 1000 req/sec.
The speed with Thin is good at 10-100 req/sec;
I want to know what happens at higher volumes of requests and responses being processed.
Guide me on this :-)
Multiple thin processes and nginx are capable of providing lots of speed, depending on what your application is doing. So, the problem will be your application code, the speed of your application server, and your database server.
Scaling Rails was recently covered in depth by the Scaling Rails Screencasts. I recommend you start there. My 5-step program for scaling Rails would be:
1. First, get the tools to look at what is slow in your application. Do not spend time optimizing everything when you don't know what the problem is.
2. The easiest way to handle lots of requests per second is page caching.
3. If you can't do that, cache everything possible (fragment caching, use memcached to cache data, etc.) to speed up your application; see the sketch after this list.
4. After that, optimize your application as best as possible: make SQL queries fast, index everything, etc.
5. If you still need more speed, throw more hardware at the problem: get a big, powerful database server and a bunch of app servers, and proxy your requests across them. You could start here too, but it will only delay the optimization process.
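As a sketch of steps 2 and 3, in Rails that might look like the following (the controller and cache key are hypothetical):

class ProductsController < ApplicationController
  caches_page :index # step 2: the web server serves the cached HTML directly

  def show
    # step 3: cache expensive data in memcached via Rails.cache
    @product = Rails.cache.fetch("product/#{params[:id]}") do
      Product.find(params[:id])
    end
  end
end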
If you have a single server, I think the main key, apart from everything already mentioned, is: don't skimp on its specs. Trying to get too much to run on too little is just a recipe for disaster.
It is also a good idea to get monit or god monitoring your Thin instances. I started out with god, but it leaked memory pretty badly on Ruby 1.8.6, so I stopped using it in favour of monit. Monit is written in C, I believe, and has a tiny memory footprint, so I'd recommend that one.
If all that seems like a bit much to keep nginx and Thin playing nicely, you may want to look into an all-in-one solution like Passenger or LiteSpeed. I have very little experience with these, so I can offer no substantial advice on them.
It seems that the only tutorials out there about using Amazon's SimpleDB in a Rails site rely on AWSDBProxy... Personally, I find this counter-intuitive for scaling out, considering the server layout of a typical Rails site below (using AWSDBProxy):
Plugin here: http://agilewebdevelopment.com/plugins/aws_sdb_proxy
Image here: http://www.freeimagehosting.net/uploads/91be4e0617.png
As you can see, even if we add more mongrels, we have two problems:
1. We have a single point of failure far less stable than our load balancer.
2. We have to force all our information through this one WEBrick server.
The solution is, of course, to add more AWSDBProxies... but why not then just use the following code in, say, a class, skipping the proxy altogether?
# Talk to SimpleDB directly from the app -- no proxy in between
service = AwsSdb::Service.new(Logger.new(nil),
                              CONFIG['aws_access_key_id'],
                              CONFIG['aws_secret_access_key'])
service.query(domain, query)
So what I'm getting at is: if you are using AWSDBProxy, what are your justifications for it? And if you are indeed using it, what is your performance like? If you have hard numbers, those would be even more appreciated!
I'm not using it, nor have I ever heard of it, but here are what I would think are reasonable justifications:
1. You're running your main app server on EC2, so the chance of Internet failure doesn't really affect you any more than usual.
2. You run one proxy on each of your app servers, so its connection going down is no worse than its connection(s) to the database going down.
3. Because it can be done. This is as good a reason as any in an open source project; sometimes it takes building a thing before you know whether said thing is a good or bad idea.
4. You don't have the traffic levels to need a load balancer, in which case your diagram squashes down to a line, if not a single machine.
Should I stay away from Rails if a client has a cheap hosting service with a provider that does not support mod_rails? Will Rails + FastCGI provide a good experience for the user, or should I choose, in this scenario, PHP + my-favorite-framework as the platform?
Regards,
Victor
FastCGI should be fine, though it is generally recommended to host Rails apps on a platform that you own. There are some pretty affordable virtual private servers out there that let you do this.
I have three clients on inexpensive hosting plans using FastCGI and have not run into any issues due to FastCGI itself. These are all low traffic sites where Mongrel was not necessary.
"Will Rails + FastCGI provide a good experience for the user?"
It all depends on what you're trying to do. If you're going to build a site where users will be uploading and playing video, then no, FastCGI is not a good choice.
"Or should I choose, in this scenario, PHP + my-favorite-framework as the platform?"
You always choose the right tool for the job. Without any details on what you are trying to build I'm not sure anyone here will be able to tell you how to build it.
My experience on low-end hosts was really, really bad: constantly having my mongrel instances die inexplicably. Since switching to a slice, I have had zero problems running it on my own.
I would tend to avoid FastCGI. I haven't used it myself but I've read enough horror stories about it to never want to.
If the hosting company is going to be completely responsible for managing the server instance and you can trust them to make sure the app is always up and running, then maybe it would work. I doubt this is the case, though, and if you don't own the servers I think you'll run into a lot of problems troubleshooting all the weird bugs FastCGI will inevitably throw at you.
Don't worry about mod_rails: it's new and Rails sites were running fine before it turned up. It's nice to have, I'm sure, but not a necessity.
By the time you're looking to get rails to scale to volumes that really need mod_rails, the site should be worth putting into an environment that runs it.
Does anybody know a nice way to restart a mongrel cluster via Capistrano in a "rolling" style, e.g. one mongrel at a time? It would also be great to have a bit of wait time in there for each one, to let the mongrel load the Rails app back up.
I've done some searching, and haven't found too much, so looking for help before I dive into the mongrel_cluster gem myself.
Thanks!
I agree with the seesaw approach more than the rolling approach you are seeking. The problem is that you end up in situations where load balancing can throw users back and forth between different versions of the application while you are transitioning.
The solution we came up with (before finding Seesaw, which we don't use) was to take half of the mongrels offline from the load balancer, shut them down, update them, and start them up again; then put those mongrels back online in the load balancer and do the same with the other half. This greatly minimizes the time during which two different versions of the application are running simultaneously.
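Here is a rough Capistrano 2 sketch of that half-at-a-time cycle (the ports and paths are assumptions, and taking the mongrels out of the load balancer is left out):

# config/deploy.rb -- hypothetical ports; adjust to your cluster
MONGREL_PORTS = [8000, 8001, 8002, 8003]

namespace :deploy do
  task :seesaw_restart, :roles => :app do
    MONGREL_PORTS.each_slice(MONGREL_PORTS.size / 2) do |half|
      half.each do |port|
        pid = "#{shared_path}/pids/mongrel.#{port}.pid"
        run "mongrel_rails stop -P #{pid}"
        run "mongrel_rails start -d -e production -p #{port} -P #{pid} -c #{current_path}"
      end
      sleep 15 # let the fresh half finish loading Rails before cycling the rest
    end
  end
end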
I wrote a windows bat file to do this. (Deploying on Windows is not recommended, btw)
It is very important to note that database migrations can make this whole approach a little dangerous. If you have only additive migrations, you can run them at any time before the deployment. If you are removing columns, you need to do it after the deployment. If you are renaming columns, it is better to split the change: run a migration that creates the new column and copies the data into it before deployment, and a separate script that removes the old column after deployment (see the sketch below). In fact, it may be dangerous to use your regular migrations on a production database in general if you don't make a specific effort to organize them. All of this points toward making more frequent deliveries, so each update is lower risk and less complex, but that's a subject for another response.
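For example, a rename split across two deployments might look like this (table and column names are made up):

# Deploy 1: additive migration, safe to run before the new code goes live
class AddCustomerNameToOrders < ActiveRecord::Migration
  def self.up
    add_column :orders, :customer_name, :string
    execute "UPDATE orders SET customer_name = client_name"
  end

  def self.down
    remove_column :orders, :customer_name
  end
end

# Deploy 2: run only once nothing reads the old column any more
class RemoveClientNameFromOrders < ActiveRecord::Migration
  def self.up
    remove_column :orders, :client_name
  end

  def self.down
    add_column :orders, :client_name, :string
  end
end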
Seesaw is a gem from the Rails Oceania RubyForge project that provides this kind of functionality for mongrel clusters. However, the project may be suffering from some bit-rot, not having had a release since 2007. Still worth a look, even just to pinch the ideas :)
#!/bin/bash
# Restart each mongrel in turn, pausing so the Rails app can load before
# the next one is cycled. Assumes RUN_MONGREL_CMD is set to the command
# that restarts a single mongrel for the given pid.
for PIDFILE in /tmp/mongrel.*; do
    PID=$(cat "${PIDFILE}")
    kill "${PID}"
    while kill -0 "${PID}" 2>/dev/null; do sleep 1; done # wait for it to exit
    ${RUN_MONGREL_CMD} "${PID}"
    sleep 2
done