Thinking_sphinx starts but rake still fails - ruby-on-rails

I am using Rails 3.0.7 and Thinking Sphinx 2.0.13.
Background:
I'm trying to create a Capistrano deploy script where Sphinx is stopped before Capistrano updates the code, and then started again once the update is done.
However, the thinking_sphinx:start rake task fails.
The commands below are run in this exact order.
$ rake thinking_sphinx:stop
Stopped search daemon (pid 54117).
searchd is not running
$ ps aux | grep searchd
xx 54597 0.0 0.0 2434892 532 s002 R+ 4:04PM 0:00.00 grep searchd
$ rake thinking_sphinx:start
Started successfully (pid 54618).
rake aborted!
searchd is already running.
Tasks: TOP => thinking_sphinx:start
(See full trace by running task with --trace)
$ ps aux | grep searchd
xx 54637 0.8 0.0 2434892 448 s002 R+ 4:06PM 0:00.00 grep searchd
xx 54618 0.0 0.0 2442992 396 s002 S 4:05PM 0:00.02 searchd --pidfile --config /Users/emil/code/wd/config/development.sphinx.conf
So yes, searchd starts, but the rake task still fails. When I run these commands from cap deploy, Capistrano rolls back my updated code because the rake task exits with an error.
How do I solve this? I have no ideas left.
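For reference, the kind of Capistrano wiring described above would look something like the sketch below (the hook points, the rails_env variable, and the || true guard are assumptions, not taken from the original script). Since Capistrano rolls back whenever a command exits non-zero, swallowing the spurious failure of thinking_sphinx:start is one possible workaround:
# config/deploy.rb -- a sketch, not the original script
before "deploy:update_code", "sphinx:stop"
after  "deploy:restart",     "sphinx:start"

namespace :sphinx do
  task :stop, :roles => :app do
    run "cd #{current_path} && RAILS_ENV=#{rails_env} bundle exec rake thinking_sphinx:stop"
  end
  task :start, :roles => :app do
    # rake exits non-zero here even though searchd comes up;
    # '|| true' keeps that from aborting and rolling back the deploy
    run "cd #{current_path} && RAILS_ENV=#{rails_env} bundle exec rake thinking_sphinx:start || true"
  end
end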

Related

"bundle exec thin start -C config/thin.yml" does not start thin

While trying to deploy a Rails app to the server, I ran into the problem that thin does not start when I start it with cap production deploy:start. What is really strange is that there are no errors.
After this I tried it directly on the deployment server:
env RAILS_ENV=production bundle exec thin start -C config/thin.yml
Starting server on /home/deployer/app/current/tmp/sockets/thin.0.sock ...
Starting server on /home/deployer/app/current/tmp/sockets/thin.1.sock ...
ls /home/deployer/app/current/tmp/sockets/
ps -aux | grep thin
root 16769 0.0 0.1 15468 908 pts/0 S 11:34 0:00 grep --color=auto thin
thin.yml:
chdir: /home/deployer/app/current
environment: production
timeout: 30
log: /home/deployer/app/current/log/thin.log
pid: /home/deployer/app/current/tmp/pids/thin.pid
socket: /home/deployer/app/current/tmp/sockets/thin.sock
max_conns: 1024
max_persistent_conns: 10
require: []
wait: 30
servers: 2
daemonize: true
What has gone wrong?
In production.log there are only the migrations.
bundle exec thin start -C config/thin.yml &
returns
Starting server on /home/deployer/app/current/tmp/sockets/thin.0.sock ...
Starting server on /home/deployer/app/current/tmp/sockets/thin.1.sock ...
'bundle exec thin start -C confi…' has ended
Answer
Okay, the answer was in log/thin.0.log: there were some errors in the code. (With servers: 2, thin writes one numbered log per server, which is why the file is thin.0.log rather than the configured thin.log.)
You need to daemonize thin to run it in production by adding &. Try this:
RAILS_ENV=production bundle exec thin start -C config/thin.yml &
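If thin exits silently like this again, the per-server logs are the quickest place to look. With servers: 2, thin numbers each log file, so given the log path in the thin.yml above the files would be:
tail -n 50 /home/deployer/app/current/log/thin.0.log
tail -n 50 /home/deployer/app/current/log/thin.1.log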

Monit & Rails sunspot_solr

I've set up monit to monitor my sunspot_solr process, which seems to work at first. If I restart the monit service with sudo service monit restart, my sunspot process starts:
ps aux | grep sunspot
root 4086 0.0 0.0 9940 1820 ? Ss 12:41 0:00 bash ./solr start -f -s /ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/solr
root 4137 45.1 4.8 1480560 185632 ? Sl 12:41 0:09 java -server -Xss256k -Xms512m -Xmx512m -XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:CMSFullGCsBeforeCompaction=1 -XX:CMSTriggerPermRatio=80 -verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/server/logs/solr_gc.log -Djetty.port=8983 -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=UTC -Djetty.home=/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/server -Dsolr.solr.home=/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/solr -Dsolr.install.dir=/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr -jar start.jar --module=http
ubuntu 4192 0.0 0.0 10460 936 pts/3 S+ 12:41 0:00 grep --color=auto sunspot
However, I'm also running tail -f /var/logs/monit.log and see this at the same time:
[CST Mar 3 12:42:54] error : 'sunspot_solr' process is not running
[CST Mar 3 12:42:54] info : 'sunspot_solr' trying to restart
[CST Mar 3 12:42:54] info : 'sunspot_solr' start: /usr/bin/sudo
[CST Mar 3 12:43:25] error : 'sunspot_solr' failed to start
Plus, to make sure monit can actually restart the sunspot_solr process, I run sudo kill -9 <the pid> and monit can't restart sunspot_solr:
[CST Mar 3 12:44:25] error : 'sunspot_solr' process is not running
[CST Mar 3 12:44:25] info : 'sunspot_solr' trying to restart
[CST Mar 3 12:44:25] info : 'sunspot_solr' start: /usr/bin/sudo
[CST Mar 3 12:44:55] error : 'sunspot_solr' failed to start
Obviously something is wrong with my monit-solr_sunspot.conf file, but after messing around with it for a few hours now, I'm stumped:
check process sunspot_solr with pidfile /ebs/staging/shared/pids/sunspot-solr.pid
  start program = "/usr/bin/sudo -H -u root /bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr start -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"
  stop program = "/usr/bin/sudo -H -u root /bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr stop -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"
I've adapted this monit script to suit my needs (Sample sunspot-solr.monit), but am still having no luck!
UPDATE
I've gotten monit to successfully restart sunspot_solr if I kill it; however, monit.log still logs the error that it failed to start.
I think monit runs as root, so you may not want to use sudo: it can prompt for a password, and monit doesn't need it.
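As a sketch, the same check without sudo could look like the following (paths copied from the config above; the raised start timeout is an extra assumption, since monit's default 30-second window can expire before Solr finishes booting, which would explain the "failed to start" messages above):
check process sunspot_solr with pidfile /ebs/staging/shared/pids/sunspot-solr.pid
  start program = "/bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr start -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"
    with timeout 60 seconds
  stop program = "/bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr stop -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"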

Duplicate process in cron job using Whenever gem for Rails

Using Rails 3.2.21 and the whenever gem. This is my crontab listing:
# Begin Whenever generated tasks for: abc
0 * * * * /bin/bash -l -c 'cd /home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake backup:perform --silent'
Here's the output when the scheduled job is run:
deployer@localhost:~$ ps aux | grep rake
deployer 25593  0.0  0.0   4448   764 ?     Ss  12:00  0:00 /bin/sh -c /bin/bash -l -c 'cd /home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake backup:perform --silent'
deployer 25594  0.0  0.1  12436  3040 ?     S   12:00  0:00 /bin/bash -l -c cd /home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake backup:perform --silent
deployer 25631 69.2  4.4 409680 90072 ?     Sl  12:00  0:06 ruby /home/deployer/abc/shared/bundle/ruby/1.9.1/bin/rake backup:perform --silent
deployer 25704  0.0  0.0  11720  2012 pts/0 S+  12:00  0:00 grep --color=auto rake
Notice that the top two processes are actually similar. Are they running the same job twice concurrently? How do I prevent that?
deployer 25593 0.0 0.0 4448 764 ? Ss 12:00 0:00 /bin/sh -c /bin/bash …
deployer 25594 0.0 0.1 12436 3040 ? S 12:00 0:00 /bin/bash …
Notice that the top two processes are actually similar. Are they running the same job twice concurrently?
No, they aren't. The first is the /bin/sh that started the second, the crontab command /bin/bash …. Most probably /bin/sh is just waiting for the /bin/bash … child to finish and is not running anything else in the meantime; you can verify this with e.g. strace -p 25593.
Check your config/schedule.rb for a duplicate entry; if you find one, remove it and deploy.
If there is no duplicate entry in schedule.rb, then you need to remove or comment out the duplicate directly in the crontab.
To delete or comment out jobs in cron, take a look at https://help.1and1.com/hosting-c37630/scripts-and-programming-languages-c85099/cron-jobs-c37727/delete-a-cron-job-a757264.html or http://www.esrl.noaa.gov/gmd/dv/hats/cats/stations/qnxman/crontab.html
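For reference, the crontab entry above corresponds to a single declaration in config/schedule.rb along these lines (a sketch; only the task name is taken from the crontab):
every 1.hour do
  rake "backup:perform"
end
Running bundle exec whenever with no arguments prints the generated cron entries without installing them, which makes accidental duplicates easy to spot.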

Trying to get delayed_job 3.0.4 running as a daemon

So I have delayed_job installed on a production app. It runs fine via rake jobs:work. But when I try to start the script via Capistrano:
run "if [ -d #{current_path} ]; then cd #{current_path} && RAILS_ENV=#{rails_env} script/delayed_job start -n 2; fi"
It starts without errors. But if I check script/delayed_job status, it tells me that no instances are running. Any suggestions?
Edit
It looks like there is something running (via sudo ps aux | grep delayed):
1000 7952 0.0 0.1 112312 832 pts/0 S+ 16:17 0:00 grep delayed
Output when I run the script:
/path/to/latest/release/config/initializers/bypass_ssl_verification_for_open_uri.rb:2: warning: already initialized constant VERIFY_PEER
Check the permissions on your shared/tmp/pids folder.
Delayed Job won't run unless the user Capistrano runs as has permission to write the PID file into that folder.
This is how I start the delayed_job daemon using Capistrano (delayed/recipes ships with the gem and defines the delayed_job:start/stop/restart tasks); maybe this will also work for you:
require "delayed/recipes"

%w[start stop restart].each do |command|
  after "deploy:#{command}", "delayed_job:#{command}"
end
The output of ps aux | grep delayed only shows its own process, so DJ is not running on your machine. Maybe this has something to do with your if-clause. You could try removing it and see if delayed_job starts correctly, again checking with ps aux | grep delayed. The output should then look something like this:
username 9989 0.0 0.0 7640 892 pts/0 S+ 10:54 0:00 grep delayed
username 10048 0.0 9.4 288244 99156 ? Sl Jan22 2:16 delayed_job
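Concretely, dropping the if-clause means reducing the original command to something like this (a sketch based on the command in the question):
run "cd #{current_path} && RAILS_ENV=#{rails_env} script/delayed_job start -n 2"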

How do I stop delayed_job if I'm running it with the -m "monitor" option?

How do I stop delayed_job if I'm running it with the -m "monitor" option? The processes keep getting restarted!
The command I start delayed_job with is:
script/delayed_job -n 4 -m start
The -m flag runs a monitor process that spawns a new delayed_job process if one dies.
The command I'm using to stop is:
script/delayed_job stop
But that doesn't stop the monitor processes, which in turn start up all the processes again. I would just like them to go away. I can kill them, which I have, but I was hoping there was some command line option to just shut the whole thing down.
In my Capistrano deploy script I have this:
desc "Start workers"
task :start_workers do
  run "cd #{release_path} && RAILS_ENV=production script/delayed_job -m -n 2 start"
end

desc "Stop workers"
task :stop_workers do
  run "ps xu | grep delayed_job | grep monitor | grep -v grep | awk '{print $2}' | xargs -r kill"
  run "cd #{current_path} && RAILS_ENV=production script/delayed_job stop"
end
To avoid any errors that may stop your deployment script:
ps xu only shows processes owned by the current user
xargs -r kill only invokes the kill command when there is something to kill
I only kill the delayed_job monitor, and stop the delayed_job daemon the normal way.
I had this same problem. Here's how I solved it:
# ps -ef | grep delay
root 8605 1 0 Jan03 ? 00:00:00 delayed_job_monitor
root 15704 1 0 14:29 ? 00:00:00 dashboard/delayed_job
root 15817 12026 0 14:31 pts/0 00:00:00 grep --color=auto delay
Here you see the delayed_job process and the monitor. Next, I would manually kill these processes and then delete the PIDs. From the application's directory (/usr/share/puppet-dashboard in my case):
# ps -ef | grep delay | grep -v grep | awk '{print $2}' | xargs kill && rm tmp/pids/*
The direct answer is that you have to kill the monitor process first. However, AFAIK there isn't an easy way to do this: I don't think the monitor PIDs are stored anywhere, and the DJ start and stop script certainly doesn't do anything intelligent there, as you noticed.
I find it odd that the monitor feature was included; I guess Daemons has it, so whoever wrote the DJ script figured they would just pass the option down. But it's not really usable as it is.
I wrote an email to the list about this a while back but didn't get an answer: https://groups.google.com/d/msg/delayed_job/HerSuU97BOc/n4Ps430AI1UJ
You can see more about monitoring with Daemons here: http://daemons.rubyforge.org/classes/Daemons.html#M000004
If you come up with a better answer/solution, add it to the wiki here: https://github.com/collectiveidea/delayed_job/wiki/monitor-process
If you can access the server, you can try these commands:
ps -ef | grep delayed_job
kill -9 XXXX  # XXXX is the process id
OR
cd path-to-app-folder/current
RAILS_ENV=production bin/delayed_job stop
RAILS_ENV=production bin/delayed_job start
You can also add this script for Capistrano 3 in config/deploy.rb:
namespace :jobworker do
  task :start do
    on roles(:all) do
      within current_path do
        with rails_env: fetch(:stage) do
          execute "bin/delayed_job start"
        end
      end
    end
  end

  task :stop do
    on roles(:all) do
      within current_path do
        with rails_env: fetch(:stage) do
          execute "bin/delayed_job stop"
        end
      end
    end
  end
end
Then run:
cap production jobworker:stop
cap production jobworker:start
