I have been thinking about this and searching for ages without finding anything, so I am going to assume I have hit the XY problem.
Let me describe my issue; it sounds common enough.
We use capistrano to deploy our web app and db. The relevant part is that we have a dedicated server for delayed_job, and we use capistrano to deploy to it and to start/restart the processes: a custom number of workers with 2 different Gemfiles and 3 queues.
What I want to do is to start them up on server restart or, more to the point, on server clone + start.
I have tried calling cap production delayed_job:custom_start from the server itself... it didn't work. (This is the core of my question, adjusted for the XY problem.) I'm not sure it even makes sense, but I want to know if it is possible. custom_start is a task that starts our set of workers.
Alternatively, I am thinking of abstracting the code into a rake task or a script of some kind and calling it both from capistrano and from wherever I would need to hook it in to run on startup. Does this make more sense?
Edit: Just found this post... discouraging...
P.S. I just want to clarify that when I say server I mean my machine/EC2 instance, not my web app restarting.
My Jenkins deploy jobs are littered with direct tasks developers can call such as cap dev app:fetch_logs, cap qa sanitize_environment, etc.
Invoking Capistrano tasks directly like this is easy and well-proven.
What I am guessing you want to do is use Capistrano to set up rc.d files. Yes, you can do this. Should you be using chef/puppet at that point? Worth considering. Scalr/RightScale are fun things to look at too.
You can write the bash script as an .erb template containing all your worker variables, then upload the rendered script into the deploy_to directory. Finally, you can set up another task (run with #{sudo}) to install rc.d wrappers into rc.d. Or, rather than rc.d wrappers around your bash script, just call the bash script from rc.local; you can use sed to append the call to rc.local.
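Roughly, in Capistrano v2 terms (the template path, role name, and sed approach are mine, not gospel; it assumes an rc.local that ends with exit 0):

require 'erb'

namespace :workers do
  task :install_boot_script, :roles => :dj do
    # render the worker script from an .erb template and upload it
    script = ERB.new(File.read("config/deploy/workers.sh.erb")).result(binding)
    put script, "#{deploy_to}/workers.sh", :mode => 0755
    # append a call to it just before rc.local's final exit 0
    run "#{sudo} sed -i 's|^exit 0|#{deploy_to}/workers.sh start\\nexit 0|' /etc/rc.local"
  end
end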
I ended up moving the delayed_job-related logic into its own script that accepts start/stop, and delegating to this script from capistrano. This means I can also add this script to my rc.local.
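A minimal sketch of such a script; the path, worker count, Gemfile, and queue names are placeholders, and the delayed_job flags depend on your version:

#!/bin/sh
# workers.sh - hypothetical start/stop wrapper shared by capistrano and rc.local
APP=/var/www/app/current
case "$1" in
start)
  cd "$APP" && RAILS_ENV=production BUNDLE_GEMFILE="$APP/Gemfile" \
    script/delayed_job --queues=mailers,imports,default -n 4 start
  ;;
stop)
  cd "$APP" && RAILS_ENV=production script/delayed_job stop
  ;;
*)
  echo "Usage: $0 {start|stop}"
  exit 1
  ;;
esac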
I've never worked with Capistrano before and currently I am fighting the urge to just scrap it and go back to my old manual ways.
As I understand it, Capistrano V3 does not create the initial database because they feel that is the duty of the DB administrator.
So I must be missing something: I have followed their instructions, but the initial cap staging deploy fails when it gets to the rake db:migrate step because the database does not exist.
Because of the failure, the symlink for current -> releases never gets created.
Is it just accepted general practice that we SSH into our boxes, cd into the first folder under releases, and manually run rake db:create...?
And then from there, am I supposed to just run cap staging deploy again so that it finishes creating the symlinks?
That seems hacky for something that is supposed to make things easier, and I am not sure if I am understanding this correctly.
Thanks.
It does make sense to leave certain things out of a deployment, as the initial setup and the routine deployments are very separate functions and require different specialties, or in large deployments even different skillsets. That said, I'm totally with you: on the first deploy, having to manually set up the database and certain files (specifically linked files like secrets.yml) is a step that just wastes my time.
I use this plugin:
https://github.com/capistrano-plugins/capistrano-postgresql
just add require 'capistrano/postgresql' to your Capfile as you would for any plugin,
then run cap staging setup before the first time you run cap staging deploy.
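Concretely, the wiring is just (stage name per your setup):

# Capfile
require 'capistrano/postgresql'

and then, one time, before the first deploy:

cap staging setup
cap staging deploy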
Our current production deployment uses Jenkins to deploy a warbler-generated war file to Tomcat. The whole thing works like a charm. The problem I'm running into, however, is how to kick-start sidekiq's workers on this machine via "bundle exec sidekiq [options]". Ideally I'd love to avoid setting up a whole separate Ruby environment on this machine just to do this, but either way, to run properly sidekiq needs access to the exploded/installed app's environment.
Is there an accepted way to do something like this? Is there a better way to start up sidekiq in situations like this, beyond bundle?
This project may be of help. It allows you to package anything that can be a rake task into a jar file. Their documentation has some specific notes about warbler use. Have a look!
To see how Sidekiq starts itself from the command line, run something like this from your project root:
cat $(bundle show sidekiq)/bin/sidekiq
You should see some lines like:
cli = Sidekiq::CLI.instance
cli.parse
cli.run
If you read into the CLI class, you'll notice that parse defaults to ARGV, but you can override it with your own arguments:
cli.parse "-q myqueue -e production".split(' ')
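So, as a hedged sketch: if you can get a Ruby process booted inside the exploded app (say, via a rake task packaged into the war), you could drive Sidekiq the same way bin/sidekiq does, with arguments you control:

require 'sidekiq/cli'

# mirror what bin/sidekiq does, but with our own arguments
cli = Sidekiq::CLI.instance
cli.parse %w[-q default -q mailers -e production -c 10]  # your options here
cli.run  # blocks; runs the worker loop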
I have a rake task which parses a streaming API and enters data into a database. The streaming API is a live feed, so the rake task should run continuously for the live data to reach the database. Once called, the rake task runs continuously and parses the data. I have started the rake task and it is running, but if I close the terminal or reboot the server, the rake task will be stopped. So I want a script on Linux (something like the one used to start or stop the apache server) which does the following:
1. Start the rake task by calling the rake command (rake parse:stream) from the RAILS_ROOT (the Rails application directory).
2. Stop the rake task by killing the process.
3. Start the rake task automatically when the server reboots.
I am not familiar with Linux scripts and I don't know where to start. I am using Ubuntu Server. Can anyone help me?
Here's an article that might also help you. It discusses various options for managing Ruby applications and their related processes:
http://michaelvanrooijen.com/articles/2011/06/08-managing-and-monitoring-your-ruby-application-with-foreman-and-upstart/
You need to run your script as a daemon. When I create this kind of startup script I usually make two files: one that lives in /etc/init.d and handles the start/stop/status/restart commands, and another that actually does the job and gets called by the first script.
Here is one solution; although the daemon script there is written in Perl, you only want to run a few command lines, so daemonizing through a Perl script could do the job easily.
If you'd rather stay in Ruby, there are also gems for daemonizing scripts, so you can write a Ruby script that runs the rake tasks.
And if you want to go hardcore, there are ways of writing bash scripts that daemonize themselves, but I'm not sure I'd recommend that; I find them pretty difficult to get right.
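For illustration, here is a stripped-down sketch of the /etc/init.d half (the app path, task name, and pid/log locations are placeholders; a real init script should also guard against an existing pidfile):

#!/bin/sh
# /etc/init.d/stream_parser - hypothetical wrapper around the rake task
APP_ROOT=/path/to/rails_app
PIDFILE=/var/run/stream_parser.pid

case "$1" in
start)
  cd "$APP_ROOT" || exit 1
  nohup rake parse:stream >> log/stream.log 2>&1 &
  echo $! > "$PIDFILE"
  ;;
stop)
  kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
  ;;
status)
  kill -0 "$(cat "$PIDFILE")" 2>/dev/null && echo running || echo stopped
  ;;
*)
  echo "Usage: $0 {start|stop|status}"
  exit 1
  ;;
esac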
Take a look at how GitHub's Resque project does it.
Essentially they create tasks for starting/restarting/stopping a particular task, in this case resque:work. Note that the restart_workers task simply invokes the other two tasks, stop and start. It should be really easy to adapt this to what you want.
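Transplanted to the task in this question, the pattern would look roughly like this (the pkill pattern and log path are assumptions):

namespace :stream do
  desc "Start the streaming parser in the background"
  task :start do
    system "nohup rake parse:stream >> log/stream.log 2>&1 &"
  end

  desc "Stop the streaming parser"
  task :stop do
    system "pkill -f 'rake parse:stream'"
  end

  desc "Restart: chains stop and start, like Resque's restart_workers"
  task :restart => [:stop, :start]
end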
I would like my Rails development environment to automatically start redis and resque (and potentially, in other projects, mongod, mysql-server, etc.) for me, in the following cases:
When starting up the development server with rails server.
Additionally, it would be nice if the following cases detected already-running services and, if they are not running, started them up too:
When running tests with rake spec or rspec spec/.
When starting up a rails console.
When shutting down the rails server, the started child services should be shut down too.
What is the correct place for such additional startup scripts?
And how do I avoid them being started in production too (where I run everything through /etc/init.d services)?
A lot of these built-in tasks are available as rake tasks already.
You can create a master rake task that does it all.
For example, with resque you get "rake resque:work", and resque-scheduler adds "rake resque:scheduler", etc.
You can create a generic "start" task that depends on the rest. Similarly, a "stop" task would shut everything down.
So you would do:
rake start # starts all associated processes
rake stop # stops them all
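A bare-bones sketch of such a Rakefile (assuming redis-server is on the PATH and resque's rake tasks are loaded; swap in whatever your project needs):

require 'resque/tasks'

desc "Start all development services"
task :start do
  system "redis-server --daemonize yes"
  system "QUEUE=* nohup rake resque:work >> log/resque.log 2>&1 &"
end

desc "Stop all development services"
task :stop do
  system "pkill -f resque:work"
  system "redis-cli shutdown"
end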
This is also very easy to drive from Capistrano when you end up deploying your code somewhere else; rake tasks are simple to call from Capistrano.
I think it's really better to do this in an external script. Doing it inside your rails server command can be really annoying for anyone trying your code.
For example, a year from now a new developer comes to your project. He may be disoriented if your rails server command launches a bunch of other applications in the background.
In the same vein, if you do that, you have to maintain this code inside your Rails environment, which can be a little tricky. Maintaining an independent script can be more useful.
You can add your script to the script directory; that is good practice. Launching a standard command that silently does more than its documentation says is not.
I finally got the DelayedJob plugin working for Rails 2, and it does indeed work fine... as long as I run:
rake jobs:work
Just like the readme says, to be fair.
BUT, this doesn't fit my requirements... what kind of background task requires you to keep a shell open with a command running? That'd be like having to run script/server to start my Rails app without ever getting the -d option that keeps it running after I close my shell.
Is there ANY way to keep the workers processing in the background, in daemon mode, or whatever?
I had a ray of hope when I saw the "You can also run by writing a simple script/job_runner, and invoking it externally" line in the readme... but that just does the exact same thing the rake task does; you just call it a different way.
What I want:
I want to start my Rails app, then start whatever will process the workers, and have BOTH of them run invisibly in the background, without me needing to babysit them and keep the shell that started them open.
(My server is something I SSH into, so I don't want to keep that SSH session running 24/7, especially since I like to turn off my local computer now and again.)
Is there any way to accomplish this?
You can make any *nix command run in the background by appending an & to the end:
rake jobs:work &
Just make sure you detach the process from your login session (with nohup or the shell's disown command) before exiting the shell; otherwise, when your session disconnects, the processes you own will be killed with it.
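For example (the log path is just a suggestion):

nohup rake jobs:work >> log/delayed_job.log 2>&1 &
disown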
Perhaps Beanstalkd and Stalker?
Beanstalk is a fast and easy way to queue background tasks. Stalker provides a nice wrapper interface for creating these jobs.
See the railscast on it for more information
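From memory of the old Stalker README (treat the exact API as an assumption and check the gem's docs), the shape of it is:

# jobs.rb
require 'stalker'
include Stalker

job 'email.send' do |args|
  # do the actual delivery here, using args['to'] etc.
end

# elsewhere in your app:
Stalker.enqueue('email.send', :to => 'user@example.com')

# and run a worker with: stalk jobs.rb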
Edit:
You could also run that rake task as a cron job, which would mean the server runs it periodically without you needing to be logged in.
Use the collectiveidea fork of delayed_job... It's more actively developed and has support for running the jobs in a daemon without any extra messing about.
My capistrano script calls
RAILS_ENV=production script/delayed_job start
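For reference, a hypothetical Capistrano (v2-era) task wrapping that command and hooked into the deploy:

namespace :delayed_job do
  task :restart, :roles => :app do
    run "cd #{current_path} && RAILS_ENV=production script/delayed_job restart"
  end
end

after "deploy:restart", "delayed_job:restart"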