Don't Log A Specific Sidekiq Job In Rails - ruby-on-rails

I have a Sidekiq job in Rails and I don't want that job to write to the log output of bundle exec sidekiq. How do I do that?
I am getting the logs with the command "sudo tail -n 1000 /var/log/syslog | grep sidekiq". My job returns the last 1000 Sidekiq log lines, and each job produces roughly 4 lines: start, enqueued with arguments (this part shows the data the job received), performed, done. When I call this job many times, the response grows larger and larger, because the job's own output includes the last 1000 Sidekiq log lines. If I call it 5 times, the last job's data becomes massive (almost 16000 lines).
I tried to use a log filterer that ChatGPT suggested to me: I created the filterer in a file under lib, created a new logger, and set its log level to error in config/initializers/sidekiq.rb, but that didn't work for me, or I couldn't set it up properly.
I also tried fetching the 1000 log lines and removing every line that includes that job's name, but somehow the data still keeps growing.

You can redirect stdout and stderr to /dev/null:
bundle exec sidekiq > /dev/null 2>&1
Or disable logging in the Sidekiq initializer:
Sidekiq.configure_server do |config|
  config.logger = Logger.new("/dev/null")
end
That said, your question is not clear, because you're grepping specifically for the word "sidekiq". You could use grep -v "sidekiq" to filter out the lines containing "sidekiq", because generally it is not good practice to disable logging.
Disabling logging for one specific job does not seem to be supported. To minimize the logging, though, you can raise the log level for the job:
sidekiq_options log_level: :warn
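If you really need to silence one specific job's lines, a hedged option is a custom log formatter that drops any line mentioning that job. This is only a sketch: it assumes a recent Sidekiq (6+), where Sidekiq::Logger::Formatters::Pretty is available, and a hypothetical job class called NoisyJob.
# config/initializers/sidekiq.rb -- sketch only
class SilenceNoisyJobFormatter < Sidekiq::Logger::Formatters::Pretty
  def call(severity, time, program_name, message)
    line = super
    # Drop any formatted line that mentions the (hypothetical) NoisyJob class
    line.to_s.include?("NoisyJob") ? "" : line
  end
end

Sidekiq.configure_server do |config|
  config.logger.formatter = SilenceNoisyJobFormatter.new
end
The start/done lines normally include the job class in their logging context, so matching on the formatted line catches them; other jobs keep logging normally.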

Related

Rails - Old cron job keeps running, can't delete it

So I'm using Rails and I have a few Sidekiq workers, but none are enabled. I'm using the sidekiq-cron gem, which requires you to put files in app/workers/, configure a sidekiq scheduler in config/sidekiq_schedule.yml, and also add a few lines in config/initializers/sidekiq.rb. However, I've commented everything out from sidekiq_schedule.yml and also commented the following lines out from sidekiq.rb:
# Sidekiq scheduler.
# schedule_file = 'config/sidekiq_schedule.yml'
# if File.exists?(schedule_file) && Sidekiq.server?
#   Sidekiq::Cron::Job.load_from_hash! YAML.load_file(schedule_file)
# end
However, if I launch Sidekiq, every minute (which is the old schedule), I see this in the prompt:
2018-01-19T02:54:04.156Z 22197 TID-ovsidcme8 ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-8609429b89db2a91793509ea INFO: start
2018-01-19T02:54:04.164Z 22197 TID-ovsidcme8 ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-8609429b89db2a91793509ea INFO: fail: 0.008 sec
and it fails because it's trying to launch a job that's not supposed to be launched.
I went to the Rails console (rails c) and tried to find the job, but nothing's in there:
irb(main):001:0> Sidekiq::Cron::Job.all
=> []
so I'm not quite sure why it's constantly trying to launch a job. If I go to the Sidekiq web interface for my application, I don't see anything in the queue: nothing being processed, busy, retried, or enqueued, nothing.
Any suggestions would be greatly appreciated. I've been trying to hunt this down for the last hour with no success. I even removed ALL of the workers from the workers directory, and yet it's still trying to launch one of them.
Because you have already loaded those jobs, I think their configuration is still stored in Redis. Check this assumption by opening a new terminal tab and running redis-cli:
KEYS '*cron*'
If those keys exist in Redis, clearing them will fix your issue.
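If you prefer to clear them from the Rails console instead of redis-cli, a hedged one-liner (assuming you are using the sidekiq-cron gem, which provides destroy_all!):
# Rails console: remove every sidekiq-cron job definition stored in Redis
Sidekiq::Cron::Job.destroy_all!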
Since you mentioned a cron job in your title but not in the question, I'm assuming there's a cron job running the background Sidekiq task.
Try running crontab -l in the terminal to see all your cron jobs. If you see something like "* * * * *", that means there's a job running every minute.
Then use crontab -r to clear your crontab and delete all scheduled tasks.

How many rails instances does delayed job initialize if running multiple pools

I'm running Delayed Job with the pool option like:
bundle exec bin/delayed_job -m --pool=queue1 --pool=queue2 start
Will this spawn one or multiple Rails instances? (i.e. will it spawn one instance for all the pools, or will every pool get its own Rails instance?)
When testing locally it seemed to only spawn one rails instance for all the pools.
But I want to confirm this 100% (especially in production).
I tried using commands like these to see what the DJ processes were actually pointing to:
ps aux, lsof, pstree
Anyone know for sure how this works, or an easy way to find out? I started digging through the source code but figured someone probably knows a quicker way.
Thanks!
It should spawn multiple processes; I'm not sure why you're seeing only one.
From the readme:
Use the --pool option to specify a worker pool. You can use this option multiple times to start different numbers of workers for different queues.
The following command will start 1 worker for the tracking queue, 2 workers for the mailers and tasks queues, and 2 workers for any jobs:
RAILS_ENV=production script/delayed_job --pool=tracking --pool=mailers,tasks:2 --pool=*:2 start
Further details after discussion in comments
The question mentions "Rails instances", but instance is a generic term. The word you're looking for is process. The text quoted from DelayedJob's readme uses the word worker, short for worker process. In Rails, you usually refer to server processes as just servers, and to worker processes as just workers.
The rails console, too, is just another process.
In Rails all these processes will load the whole application, but will do different things.
Server processes will wait for incoming HTTP requests and send back responses; worker processes will periodically poll a queue (DelayedJob uses the DB) and execute jobs; the console process will start a REPL and wait for input.
They will all have access to the same code (models, DB config, assets, view templates, etc.), but will have very different responsibilities.
I hope this makes things clearer.
After digging through the code, the short answer is:
Running something like this:
bundle exec bin/delayed_job -m --pool=queue1 --pool=queue2 start
will start ONE rails process/instance for ALL the pools/queues you specify.
Details below if you want more explanation:
In the Command class:
This loops through and sets up the workers:
def setup_pools
  worker_index = 0
  @worker_pools.each do |queues, worker_count|
    options = @options.merge(:queues => queues)
    worker_count.times do
      process_name = "delayed_job.#{worker_index}"
      run_process(process_name, options)
      worker_index += 1
    end
  end
end
Which will run this for each queue:
def run(worker_name = nil, options = {})
  Dir.chdir(root)
  Delayed::Worker.after_fork
  Delayed::Worker.logger ||= Logger.new(File.join(@options[:log_dir], 'delayed_job.log'))
  worker = Delayed::Worker.new(options)
  worker.name_prefix = "#{worker_name} "
  worker.start
end
Each worker is daemonized, but no new Rails processes are started. It just loops through each pool/queue within its own daemon.
You can see this in the "start" method:
def start
  loop do
    self.class.lifecycle.run_callbacks(:loop, self) do
      @realtime = Benchmark.realtime do
        @result = work_off
      end
    end
    # ...
  end
end
If you want to start a separate Rails instance for each queue, you could use monit and do something like:
check process delayed_job_0
  with pidfile /var/www/apps/{app_name}/shared/pids/delayed_job.0.pid
  start program = "/usr/bin/env RAILS_ENV=production /var/www/apps/{app_name}/current/bin/delayed_job start -i 0"
  stop program = "/usr/bin/env RAILS_ENV=production /var/www/apps/{app_name}/current/bin/delayed_job stop -i 0"
  group delayed_job

check process delayed_job_1
  with pidfile /var/www/apps/{app_name}/shared/pids/delayed_job.1.pid
  start program = "/usr/bin/env RAILS_ENV=production /var/www/apps/{app_name}/current/bin/delayed_job start -i 1"
  stop program = "/usr/bin/env RAILS_ENV=production /var/www/apps/{app_name}/current/bin/delayed_job stop -i 1"
  group delayed_job

Rails.root points to the wrong directory in production during a Resque job

I have two jobs that are queued simulataneously and one worker runs them in succession. Both jobs copy some files from the builds/ directory in the root of my Rails project and place them into a temporary folder.
The first job always succeeds, never have a problem - it doesn't matter which job runs first either. The first one will work.
The second one receives this error when trying to copy the files:
No such file or directory - /Users/apps/Sites/my-site/releases/20130829065128/builds/foo
That releases folder is two weeks old and should not still be on the server. It is empty, housing only a public/uploads directory and nothing else. I have killed all of my workers and restarted them multiple times, and have redeployed the Rails app multiple times. When I delete that releases directory, it makes it again.
I don't know what to do at this point. Why would this worker always create/look in this old releases directory? Why would only the second worker do this? I am getting the path by using:
Rails.root.join('builds') - Rails.root is apparently a 2-week-old Capistrano release? I should also mention this only happens in the production environment. What can I do?
Resque is not being restarted (stopped and started) on deployments, which causes old versions of the code to be run. Each worker continues to service the queue, resulting in strange errors or behaviors.
Based on the path name it looks like you are using Capistrano for deploying.
Are you using the capistrano-resque gem? If not, you should give that a look.
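As a hedged sketch of how that gem is wired up (the host name and worker counts below are placeholders, and the exact option names can vary by capistrano-resque version):
# Capfile
require 'capistrano/resque'

# config/deploy.rb
role :resque_worker, 'app.example.com'   # placeholder host
set :workers, { '*' => 1 }               # queue name => number of workers
With that in place, the gem gives you cap tasks to start, stop, and restart the workers as part of the deploy, so stale workers from old releases stop piling up.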
I had exactly the same problem and here is how I solved it:
In my case the problem was how Capistrano handles the PID files, which record which workers currently exist. These files are normally stored in tmp/pids/. You need to tell Capistrano NOT to store them in each release folder, but in shared/tmp/pids/. Otherwise Resque does not know which workers are currently running after you make a new deployment: it looks into the new release's pids folder, finds no files, and therefore assumes that there are no workers that need to be shut down. Resque just creates new workers, and all the old workers still exist, but you cannot see them in the Resque dashboard. You can only see them if you check the processes on the server.
Here is what you need to do:
Add the following lines to your deploy.rb (by the way, I am using Capistrano 3.5):
append :linked_dirs, ".bundle", "tmp/pids"
set :resque_pid_path, -> { File.join(shared_path, 'tmp', 'pids') }
On the server, run htop in the terminal and press T to see all the processes which are currently running. It is easy to spot the resque worker processes; you can also see the release folder's name attached to them.
You need to kill all worker processes by hand. Exit htop and type the following command to kill all resque processes (I like to have it completely clean):
sudo kill -9 `ps aux | grep [r]esque | grep -v grep | cut -c 10-16`
Now you can make a new deploy. You also need to start the resque-scheduler again.
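If you are on capistrano-resque, one hedged way to make sure the workers and the scheduler come back after every deploy is a pair of hooks like these (the task names assume the gem's built-in tasks):
# config/deploy.rb
after 'deploy:finished', 'resque:restart'
after 'deploy:finished', 'resque:scheduler:restart'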
I hope that helps.

delayed_job -i via cron script through ruby will not start after stopping previous processes

So I have a weird situation: I have delayed_job 2.0.7, daemons 1.0.10, Ruby 1.8.7, and Rails 2.3.5 running on Scientific Linux release 6.3 (Carbon).
I have a rake task that restarts delayed_job each night and then does a bunch of batch processing. I used to just do ruby script/delayed_job stop and then start. I have since added a backport of named queues, and because of this I want to start several processes for each named queue. The best way I found to do this is to use -i to name each process differently so they don't collide.
I wrote some Ruby code to do this looping, and it works great in dev, on the command line, and when called from the rails console. But when called from cron it fails silently: the call returns false but no error message.
# this works
system_call_result1 = %x{ruby script/delayed_job stop}
SnapUtils.log_to_both "result of stop all - #{system_call_result1} ***"

# this works
system_call_result2 = %x{mv log/delayed_job.log log/delayed_job.log.#{Date.today.to_s}}
SnapUtils.log_to_both "dj log file has been rotated"

# this fails, result is empty string, if I use system I get false returned
for x in 1..DELAYED_JOB_MAX_THREAD_COUNT
  system_call_result = %x{ruby script/delayed_job -i default-#{x} start}
  SnapUtils.log_to_both "result of start default queue iteration #{x} - #{system_call_result} ***"
end

# this fails the same way
for y in 1..FOLLOWERS_DELAYED_JOB_MAX_THREAD_COUNT
  system_call_result = %x{ruby script/delayed_job --queue=follower_ids -i follower_ids-#{y} start}
  SnapUtils.log_to_both "result of start followers queue iteration #{y} - #{system_call_result} ***"
end
So I did a lot of trial and error and found that this problem only happens if I use -i (named processes), and only if I stop them and then try to start them. If I remove the stops, everything works fine.
Again this is only when I use cron.
If I use command line or console to run, it works fine.
So my question is: what could cron be doing differently that causes these named DJ processes not to start if you previously stopped them in the same Ruby process?
thanks
Joel
OK, I finally figured this out. While checking to see whether cron would send email, we found that sendmail was broken: the version of MySQL that sendmail wanted was not installed. We fixed that, and then our problem magically went away. I would still offer the bounty to anyone who can explain exactly why.
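For anyone debugging a similar silent failure from cron, a minimal sketch of surfacing the error output of those %x calls (SnapUtils and the script path come from the question; the exit-status logging is an addition):
# Redirect stderr into stdout so errors from the daemonized start are captured,
# and log the exit status of the child process as well.
result = %x{ruby script/delayed_job -i default-1 start 2>&1}
SnapUtils.log_to_both "delayed_job start output: #{result.inspect} (exit status: #{$?.exitstatus})"
In this case the root cause turned out to be the broken sendmail above, but capturing stderr is often the quickest way to see why cron-launched commands return false.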

ruby on rails, fork, long running background process

I am initiating long-running processes from a browser and showing results after they complete. I have defined this in my controller:
def runthejob
  pid = Process.fork
  if pid.nil?
    # Child
    output = execute_one_hour_job()
    update_output_in_database(output)
    # Exit here as this child process shouldn't continue anymore
    exit
  else
    # Parent
    Process.detach(pid)
    # send response - job started...
  end
end
The request in the parent completes correctly. But in the child there is always a "500 Internal Server Error"; Rails reports "Completed 500 Internal Server Error in 227192ms". I am guessing this happens because the request/response cycle of the child process is never completed, as there is an exit in the child. How do I fix this?
Is this the correct way to execute long-running processes? Is there a better way to do it?
When the child is running, if I do "ps -e | grep rails", I see that there are two instances of "rails server". (I use the command "rails server" to start my Rails server.)
ps -e | grep rails
75678 ttys002 /Users/xx/ruby-1.9.3-p194/bin/ruby script/rails server
75696 ttys002 /Users/xx/ruby-1.9.3-p194/bin/ruby script/rails server
Does this mean that there are two servers running? How are the requests handled now? Won't the request go to the second server?
Thanks for helping me.
Try your code in production and see if the same error comes up. If not, your error may be from the development environment being cleared when the first request completes, whilst your forks still need the environment to exist. I haven't verified this with forks, but that is what happens with threads.
This line in your config/development.rb will retain the environment, but any code changes will require server restarts.
config.cache_classes = true
There are a lot of frameworks to do this in Rails: DelayedJob, Resque, Sidekiq, etc.
You can find a whole lot of examples on RailsCasts: http://railscasts.com/
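As a hedged illustration of that advice (Sidekiq picked arbitrarily; OneHourJob is a hypothetical class, and execute_one_hour_job / update_output_in_database are the methods from the question), the forked block would move into a worker:
# app/workers/one_hour_job.rb -- sketch only
class OneHourJob
  include Sidekiq::Worker

  def perform
    output = execute_one_hour_job()   # the hour-long work from the question
    update_output_in_database(output) # persist the result for later display
  end
end

# In the controller action, enqueue and respond immediately:
#   OneHourJob.perform_async
The controller then returns right away, and the browser can poll or be notified once the database has the result.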
