Our current production deployment uses Jenkins to deploy a Warbler-generated war file to Tomcat. The whole thing works like a charm. The problem I'm running into, however, is how to start up Sidekiq's workers on this machine via "bundle exec sidekiq [options]". Ideally I'd love to avoid setting up a whole separate Ruby environment on this machine just to do this, but either way, to run properly, Sidekiq needs access to the exploded/installed app's environment.
Is there an accepted way to do something like this? Is there a better way to start up Sidekiq in situations like this beyond bundle?
This project may be of help. It allows you to package anything that can be a rake task into a jar file. Their documentation has some specific notes about warbler use. Have a look!
For notes on how to run Sidekiq from somewhere other than the command line, run something like this from your project root:
cat $(bundle show sidekiq)/bin/sidekiq
You should see some lines:
cli = Sidekiq::CLI.instance
cli.parse
cli.run
If you read into the CLI class, you'll notice that parse takes ARGV as its default argument, but you can override it with your own arguments:
cli.parse "-q myqueue -e production".split(' ')
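So, as a rough sketch (this is not an officially documented embedding API, and the queue name and options below are placeholders), you could boot Sidekiq from Ruby code running inside the exploded app along these lines:
require 'sidekiq/cli'

cli = Sidekiq::CLI.instance
cli.parse %w[-q myqueue -e production -c 5]
cli.run  # blocks until shutdown, so give it its own process (or thread) inside the container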
I have a simple shell script (happy.sh) that I am currently running by hand (. ./happy.sh) every time I restart a Rails server. It sets some environment variables that I need because of various APIs I'm using. However, I was wondering if there's some way of having Rails run the script itself whenever I run "guard" or "rails s".
Thanks
If you use foreman, you can define all the processes you need started on application start in a Procfile (including bundle exec rails server -p $PORT).
By calling foreman start, all the processes start up.
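For example, a minimal Procfile might look like the sketch below (process names are arbitrary, and whether you want guard in it is up to you). Foreman also reads a .env file in the same directory by default, so the API variables that happy.sh exports can live there as plain KEY=value lines instead.
web: bundle exec rails server -p $PORT
guard: bundle exec guard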
You can find more information in this cast.
The proper way to set ENV variables is to put them in .bash_profile or .bashrc, depending on your Linux distro.
vi ~/.bash_profile
And then add something like
export MY_RAILS_VAR=123
Then you don't need to run any ENV initialization scripts on rails start.
I have a Rails runner task that I want to run from cron, but of course cron runs as root, so the environment isn't set up in a way that lets RVM work properly. I've tried a number of things and none have worked so far. The crontab entry is:
* 0 * * * root cd /home/deploy/rails_apps/supercharger/current/ && /usr/local/rvm/wrappers/ruby-1.9.3-p484/ruby bundle exec rails runner -e production "Charger.start"
Apologies for the super long command line. Anyhow, the error I'm getting from this is:
ruby: No such file or directory -- bundle (LoadError)
So ruby is being found in the RVM directory, but again, the environment is wrong.
I tried rvm alias delete [alias_name] and it seemed to do something, but darn if I know where the wrapper it generated went. I looked in /usr/local/rvm/wrappers and didn't see one with the name I had specified.
This seems like a common problem -- common enough that the whenever gem exists. The runner command I'm using is so simple, it seemed like a slam dunk to just put this entry in the crontab and go, but not so much...
Any help with this is appreciated.
It sounds like you could use a third-party tool to tether your Rails app to cron: Whenever. You already know about it, but it seems you never tried it. This gem includes a simple DSL that could be applied in your case like:
every :day do # or specify another period; see the README
  runner "Charger.start"
end
Once you've defined your schedule, you'll need to write it into your crontab with the whenever command-line utility. See the README file and whenever --help for details.
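For example (flags can vary between whenever versions, so confirm with whenever --help):
whenever                    # preview the generated cron entries without writing them
whenever --update-crontab   # write them into the current user's crontab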
It should not cause any performance impact at runtime, since all it does is convert the schedule into crontab format on deployment or on an explicit command. Once the server is running, the gem isn't needed; everything is handled by cron from then on.
If you don't want an extra gem anyway, you can at least check what command it issues to execute your task and add that to the crontab yourself. Still, an automated way of adding a cron task is easier to maintain and to deploy. Sure, just tossing a line into the crontab is easier, but only for you and only this once. Then it starts to get repetitive and tiring, not to mention confusing for other developers who will have to set up something similar on their own machines.
You can run a cron task as a user other than root. Even in your example, the task begins with
* 0 * * * root cd
root is the user that runs the command. You can edit another user's crontab with crontab -e -u username.
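Note that a per-user crontab has no user column, so an entry edited with crontab -e -u deploy would look something like the line below (the rvm ... do invocation is just one way to get the right Ruby on the PATH; the paths are taken from the question but still worth double-checking):
* 0 * * * cd /home/deploy/rails_apps/supercharger/current && /usr/local/rvm/bin/rvm ruby-1.9.3-p484 do bundle exec rails runner -e production "Charger.start"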
If you insist on running the cron task as root, or running it as another user doesn't work for some reason, you can switch users with su. For example:
su - username -c 'bundle exec rails runner -e production "Charger.start"'
I have been thinking about this and searching for this for ages without finding anything, so I am going to assume I've hit the XY problem.
Let me describe my issue; this sounds common enough.
We use capistrano to deploy our web app and db. The relevant part is that we have a dedicated server for delayed job and we use capistrano to deploy to it and start/restart the processes. This is a custom number of workers with 2 different Gemfiles and 3 queues.
What I want to do is to start them up on server restart or, more to the point, on server clone + start.
I have tried calling cap production delayed_job:custom_start from the server itself, which didn't work. (This is the core of my non-XY-problem-adjusted question.) I'm not sure it even makes sense, but I want to know if it is possible. custom_start is a task that starts our set of workers.
Alternatively, I am thinking of abstracting the code into a rake task or a script or something and calling it from both Capistrano and wherever I would need to add it to start on startup. Does this make more sense?
Edit: Just found this post.. discouraging..
P.S. I just want to clarify that when I say server I mean my machine/EC2 instance, not my web app restarting.
My Jenkins deploy jobs are littered with direct tasks developers can call, such as cap dev app:fetch_logs, cap qa sanitize_environment, etc.
This feature of Capistrano is easy and verified.
What I am guessing you want to do is use Capistrano to set up rc.d files. Yes, you can do this. Should you use chef/puppet at this point? Worth considering. Scalr/Rightscale are fun things to look at too.
You can add a bash script as an .erb template for all your worker variables, then upload the rendered script into the deploy_to directory. Finally, you can set up another task (run with #{sudo}) to inject rc.d wrappers into rc.d. Or, rather than writing rc.d wrappers for your bash script, just call the bash script from rc.local; you can use sed to append to rc.local.
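A rough Capistrano 2-style sketch of that idea could look like this (the template path, script name, and the grep guard are assumptions, not a drop-in recipe; on systems where /etc/rc.local ends with exit 0, the new line has to be inserted before it, e.g. with sed, rather than appended):
require 'erb'

namespace :workers do
  desc "Upload the worker start script and hook it into rc.local"
  task :install_startup_script, :roles => :app do
    # render the .erb template that holds the worker variables
    script = ERB.new(File.read("config/templates/workers.sh.erb")).result(binding)
    put script, "#{deploy_to}/shared/workers.sh", :mode => 0755
    # append a call to the script to /etc/rc.local, but only once
    run "#{sudo} sh -c 'grep -q workers.sh /etc/rc.local || " \
        "echo \"#{deploy_to}/shared/workers.sh start\" >> /etc/rc.local'"
  end
end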
I ended up moving the delayed_job-related logic into its own script that accepts start/stop, and delegating to this script from Capistrano. This means I can also add this script to my rc.local.
I'm using delayed_job to create job queues such as 'mailer'.
For this to work I have to run this:
$ RAILS_ENV=development QUEUE=mailer rake jobs:work
But if the server crashes and is restarted, I need the worker to start running again automatically.
What would be the recommended way to deal with this?
You need to use a third-party service like monit/bluepill/god/upstart to watch the process and restart it. I recommend using the combination of foreman and upstart. See here: http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
Some time ago I wrote a patch for DelayedJob to reload the classes for every job in development mode. The same patch should work for your requirement as well.
betamatt's approach is definitely one way to do it if you have such a monitoring tool in place.
Another way to do it would simply be to add a script to your OS's startup that runs the RAILS_ENV=development QUEUE=mailer rake jobs:work command as a user who has the necessary permissions.
Here's an example of how to do it on Ubuntu using Upstart, but if you look up similar init.d methods, or whatever is relevant for your server OS, you'll find other ways. What you're looking for, basically, is "how to run a script on startup on [your OS name]", and then you wrap your command in an executable script.
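As an illustration only (the job name, user, and paths are placeholders; the command mirrors the one quoted above), an Upstart job dropped into /etc/init might look like:
# /etc/init/delayed-job-mailer.conf
description "delayed_job mailer worker"
start on runlevel [2345]
stop on runlevel [016]
respawn
# run as the deploy user from the application directory
exec su -s /bin/bash -c 'cd /home/deploy/app/current && RAILS_ENV=development QUEUE=mailer bundle exec rake jobs:work' deploy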
I had the same issue with the application I'm working with, so I wrote a rake task that runs every minute (as a cron job). When delayed_job starts, it creates a .pid file in the tmp folder; I use this to check for the existence of a delayed_job process. If the file doesn't exist, I run the start command from code.
delayed_job_status = File.file?("./tmp/pids/delayed_job.pid")
This checks for the existence of the process. If it returns false, move on to the next statement:
bundle exec script/delayed_job start production
This starts delayed_job.
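Put together, that rake task might look roughly like this (the pid path and start command are the ones from this answer, and the task should run from the application root; note that a stale pid file left over after a crash would still fool the check):
namespace :delayed_job do
  desc "Start delayed_job if its pid file is missing (run from cron every minute)"
  task :watchdog do
    unless File.file?("./tmp/pids/delayed_job.pid")
      system("bundle exec script/delayed_job start production")
    end
  end
end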
My solution was to create a bash script in the user's home directory, delayed_job_startup.sh,
which contains
#!/bin/bash
cd /home/deploy/project/current/
RAILS_ENV=production bin/delayed_job start
and in /etc/rc.local I run this script as my user:
su -s /bin/bash - deploy /home/deploy/delayed_job_startup.sh
I have a rake task that parses a streaming API and enters data into the database. The streaming API is a live feed, and the rake task should run continuously for the live data to enter the database. Once called, the rake task runs continuously and parses the data. I have started the rake task and it is running. The problem is that if I close the terminal or reboot the server, the rake task will be stopped. So I want a script in Linux (something like the one used to start or stop the Apache server) that does the following:
1. start the rake task by calling rake command (rake parse:stream) from the RAILS-ROOT (application directory of Rails app)
2. stop the rake task by killing the process.
3. start the rake task automatically when the server reboots.
I am not familiar with Linux scripts and I don't know where to start. I am using Ubuntu Server. Can anyone help me?
Here's an article that might also help you. It discusses various options for managing Ruby applications and their related processes:
http://michaelvanrooijen.com/articles/2011/06/08-managing-and-monitoring-your-ruby-application-with-foreman-and-upstart/
You need to run your script as a daemon. When I create this kind of startup script, I usually make two files: one that stays in /etc/init.d and handles the start/stop/status/restart commands, and another one that actually does the job and gets called by the first script.
Here is one solution, and although the daemon script is written in Perl, you only want to run a few command lines, so daemonizing a Perl script could do your job easily.
If you want, there are also Ruby gems for daemonizing scripts, so you can write a script in Ruby that runs the rake task.
And if you want to go hardcore, there are solutions for writing bash scripts that can daemonize, but I'm not sure I would recommend a solution like that; at least I find them pretty difficult to use.
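For the Ruby-gem route, a minimal wrapper using the daemons gem might look like the sketch below (the script name, pid directory, and application path are assumptions). Running it with start, stop, restart, or status gives you the usual init-style commands:
# stream_parser_ctl.rb -- e.g. "ruby stream_parser_ctl.rb start"
require 'daemons'

Daemons.run_proc('stream_parser', :dir_mode => :normal, :dir => '/home/deploy/app/tmp/pids') do
  Dir.chdir('/home/deploy/app')   # the Rails root, so rake can find the Rakefile
  exec('bundle', 'exec', 'rake', 'parse:stream')
end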
Take a look at how Github's Resque project does it.
Essentially they create tasks for starting/restarting/stopping a particular task, in this case resque:work. Note that the restart_workers task simply invokes the other tasks, stop and start. It should be really easy to change this for what you want.
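Adapted to the streaming task above, that pattern could be sketched roughly like this (the task names, pid file, and log path are all made up, and a production setup would add error handling):
namespace :parser do
  pid_file = "tmp/pids/stream_parser.pid"

  desc "Start the streaming parser in the background"
  task :start do
    pid = spawn("bundle exec rake parse:stream",
                :out => "log/stream_parser.log", :err => "log/stream_parser.log")
    Process.detach(pid)
    File.write(pid_file, pid)
  end

  desc "Stop the streaming parser"
  task :stop do
    if File.file?(pid_file)
      Process.kill("TERM", File.read(pid_file).to_i)
      File.delete(pid_file)
    end
  end

  desc "Restart the streaming parser"
  task :restart => [:stop, :start]
end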