I have one job that keeps running. Here is the result of Sidekiq::Cron::Job.all.to_yaml. My problem is that this code hasn't been used in months and I can't delete it for the life of me.
I have used the Sidekiq web UI gem to delete all dead and retry versions of this job, and have tried using kill as well, but nothing is working:
- !ruby/object:Sidekiq::Cron::Job
  fetch_missing_args: false
  name: bundle_resolve_job
  cron: 7,27,47 * * * *
  description: ''
  klass: Jobs::Order::Bundle::ResolvePayingJob
  status: enabled
  last_enqueue_time: 2022-07-26 09:27:42.000000000 Z
  args: []
  active_job: false
  active_job_queue_name_prefix: ''
  active_job_queue_name_delimiter: ''
  message: '{"retry":true,"queue":"general","class":"Jobs::Order::Bundle::ResolvePayingJob","args":[]}'
  queue: general
  queue_name_with_prefix: general
and this is config/schedule.yml:
global_parcel_resolve_job:
  class: "Jobs::Global::Parcel::ResolvePayingJob"
  cron: "7,27,47 * * * *"
  queue: general
How can I permanently remove this job, which keeps running the old version of the code?
Create an empty version of the job class. Let it "succeed".
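A minimal no-op version of that class might look like the following sketch. The module nesting is inferred from the klass value in the YAML dump above, and the Sidekiq include is guarded so the snippet also loads where the gem is absent:

```ruby
# No-op stand-in for the stale worker class. The nesting mirrors
# Jobs::Order::Bundle::ResolvePayingJob from the YAML dump above.
begin
  require "sidekiq"
rescue LoadError
  # sidekiq not installed in this sketch environment; the guard below copes
end

module Jobs
  module Order
    module Bundle
      class ResolvePayingJob
        include Sidekiq::Worker if defined?(Sidekiq::Worker)

        # Accept any arguments and do nothing, so every queued copy
        # "succeeds" instead of retrying with the old code.
        def perform(*_args); end
      end
    end
  end
end
```

Once every queued and retrying copy has run through this stub, the job should stop reappearing and the class (and any leftover cron entry) can be deleted for good.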
I'm trying to lock a part of my code using Redis, with the Redlock library.
I've implemented it here:
def perform(task_id, task_type)
  lock_manager = Redlock::Client.new(["redis://127.0.0.1:6379"])
  lock_key = "task_runner_job_#{task_type}_#{task_id}"
  puts("locking! #{task_id}")
  lock_task = lock_manager.lock(lock_key, 6 * 60 * 1000)
  if lock_task.present?
    begin
      # Exec task...
      Services::ExecTasks::Run.call task_id
    ensure
      puts("unlocking! #{task_id}")
      lock_manager.unlock(lock_task)
    end
  else
    puts("Resource is locked! #{lock_key}")
  end
end
What I get when running multiple Sidekiq jobs at the same time is the following log output:
"locking! 520"
"locking! 520"
"unlocking! 520"
"unlocking! 520"
This happens when both of my 520 tasks, which should not be executed together, are called within 1 ms of each other.
Yet sometimes the lock works as expected.
I've checked Redis and it works just fine.
Any ideas? Thanks!
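For reference, the mutual exclusion the code above expects from Redlock can be modeled with an in-memory registry. This is a hypothetical single-process stand-in, not a substitute for the Redis lock, but it mirrors the shape of the calls:

```ruby
require "monitor"

# In-memory stand-in mirroring the Redlock calls above: lock returns a
# truthy token on success and nil when the key is already held.
class LocalLockManager
  def initialize
    @locks = {}
    @monitor = Monitor.new
  end

  def lock(key)
    @monitor.synchronize { @locks.key?(key) ? nil : (@locks[key] = key) }
  end

  def unlock(token)
    @monitor.synchronize { @locks.delete(token) }
  end
end
```

With this stand-in, a second lock on the same key returns nil until the first holder unlocks, which is the behavior the two concurrent jobs rely on; if the real client ever lets two lock calls on the same key succeed, the TTL, the exact key string, and the Redis topology are worth double-checking.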
I have a Jenkins parallel build issue. I have a large configuration-options blob, and I want to call the same build job repeatedly with changes to the config blob. I also want to delay a little between each run. The number of calls is based on selections, so it could be anywhere from 1 job to, say, 7 jobs at once, hence building the branches programmatically. Given the much pared-down version of the code below, assuming 'opts' is what comes in from the selection, could someone help me accomplish this? The long job has a timestamp inside it, so we don't want all the runs kicking off at the exact same moment.
Without the 'sleep' I see "Starting building: myLongJob #100", then 101, then 102, etc. However, there is no delay between the jobs. With the sleep I see "Starting building: myLongJob #100", then 100, then 100 again.
Other than the given code, I tried adding quietPeriod: diffnum+1, but that either did nothing or waited until the long job finished. I also tried adding wait: true (and wait: false). Either way I see no delay, or the same build number repeatedly.
cfgOtps = """
server: MYOPTION
prefix: Testing
"""

def opts = ["Windows", "Linux", "Mac"]

// function will create a list of cfg blobs with server replaced by 'opt', so in this
// example cfgs will have length 3, with server being Windows, Linux, and Mac respectively
def cfgs = generate_cfg_list(cfgOtps, opts)

// update so each entry in branches has a different key
def branchnum = 0
def branches = [:]

// variable to be different inside each closure
def diffnum = -1
def runco = ""

cfgs.each {
    branchnum += 1
    branches["branch${branchnum}"] = {
        diffnum += 1
        //sleep(diffnum + 5)
        def deployResult = build job: 'myLongJob',
            parameters: [[$class: 'TextParameterValue', name: 'ConfigObj', value: cfgs[diffnum]],
                         string(name: 'dummy', value: "${diffnum}")]
    }
}
parallel branches
I expected the output to be something like the following with a short delay between them.
Starting building: myLongJob #100
Starting building: myLongJob #101
Starting building: myLongJob #102
Which I do get if I do not have the sleep. The issue there is that the long job must not run concurrently with itself, as simultaneous runs sometimes overwrite things.
Adding the sleep results in the following
Starting building: myLongJob #100
Starting building: myLongJob #100
Starting building: myLongJob #100
or maybe two with the same number and one with a different build number. I am not sure why a sleep would induce that behavior.
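The symptom is consistent with all the branch closures sharing the single diffnum variable: a Groovy closure captures the variable itself, not its value at definition time, so once the sleep lets every branch increment diffnum before any of them reads it, they can all build with the same cfgs entry (and identical parameters can coalesce into one build). Ruby blocks capture variables the same way, so the effect can be sketched in plain Ruby:

```ruby
# Three "branches" share one counter, increment it on start, sleep, then
# read it, mimicking `diffnum += 1` followed by sleep in the pipeline closures.
counter = -1
reads = []
threads = 3.times.map do
  Thread.new do
    counter += 1   # runs when the branch starts, not when it was defined
    sleep 0.2      # by the time the sleep ends, every branch has incremented
    reads << counter
  end
end
threads.each(&:join)
# reads is typically [2, 2, 2]: every branch sees the final counter value
```

The usual fix is to take a per-iteration local copy in the loop body, outside the closure (e.g. something like def mynum = branchnum - 1 before defining branches["branch${branchnum}"], then using mynum inside the closure), so each branch is pinned to its own index regardless of when it runs.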
I'm using sidekiq-cron to run some jobs. I have a parent job which only runs once, and that parent job starts 7 million child jobs. However, my Sidekiq dashboard says over 42 million jobs are enqueued, and when I checked those enqueued jobs, they are my child jobs. I'm trying to figure out why so many more jobs than expected were enqueued. One thing I noticed in the Sidekiq log is that "Cron Jobs - add job with name: new_topic_post_job" shows up many times (new_topic_post_job is the name of the parent job in schedule.yml). The following lines also show up many times:
2019-04-18T17:01:22.558Z 12605 TID-osb3infd0 WARN: Processing recovered job from queue queue:low (queue:low_i-03933b94d1503fec0.nodemodo.com_4): "{\"retry\":false,\"queue\":\"low\",\"backtrace\":true,\"class\":\"WeeklyNewTopicPostCron\",\"args\":[],\"jid\":\"f37382211fcbd4b335ce6c85\",\"created_at\":1555606809.2025042,\"locale\":\"en\",\"enqueued_at\":1555606809.202564}"
2019-04-18T17:01:22.559Z 12605 TID-osb2wh8to WeeklyNewTopicPostCron JID-f37382211fcbd4b335ce6c85 INFO: start
WeeklyNewTopicPostCron is the parent job class. Does this mean my parent job ran multiple times instead of only once? If so, what's the cause? I'm pretty sure the cron expression is right: I set it to "0 17 * * 4", which means it only runs once a week. I also set retry to false for the parent job and 3 for the child jobs, so even if all child jobs failed, we should still only have 21 million jobs. The following is my cron job setting in schedule.yml:
new_topic_post_job:
  cron: "0 17 * * 4"
  class: "WeeklyNewTopicPostCron"
  queue: low
and this is WeeklyNewTopicPostCron:
class WeeklyNewTopicPostCron
  include Sidekiq::Worker
  sidekiq_options queue: :low, retry: false, backtrace: true

  def perform
    processed_user_ids = Set.new
    TopicFollower.select("id, user_id").find_in_batches(batch_size: 1000000) do |topic_followers|
      new_user_ids = []
      topic_followers.map(&:user_id).each { |user_id| new_user_ids << user_id if processed_user_ids.add?(user_id) }
      batch_size = 1000
      offset = 0
      loop do
        batched_user_ids_for_redis = new_user_ids[offset, batch_size]
        Sidekiq::Client.push_bulk('class' => NewTopicPostSender,
                                  'args' => batched_user_ids_for_redis.map { |user_id| [user_id, 7] }) if batched_user_ids_for_redis.present?
        break if batched_user_ids_for_redis.size < batch_size
        offset += batch_size
      end
    end
  end
end
Most probably your parent sidekiq job is causing the sidekiq process to crash, which then results in a worker restart. On restart sidekiq probably tries to recover the interrupted job and starts processing it again (from the beginning). Some details here:
https://github.com/mperham/sidekiq/wiki/Reliability#recovering-jobs
This probably happens multiple times before the parent job eventually finishes, hence the extremely high number of child jobs created. You can easily verify this by checking the process id of the sidekiq process while this job is running; it will most probably keep changing after a while:
ps aux | grep sidekiq
It could be that you have some monit configuration to restart sidekiq in case memory usage goes too high. Or it might be that this query is causing the process to crash:
TopicFollower.select("id, user_id").find_in_batches(batch_size: 1000000)
Try reducing the batch_size; 1 million feels like too high a number. But my best guess is that the Sidekiq process dies while processing the long-running parent job.
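As an aside, the manual offset loop in the question's worker can be replaced by each_slice, which removes the offset bookkeeping entirely. A sketch with a plain-Ruby recorder standing in for the push call (in the worker this would be Sidekiq::Client.push_bulk with 'class' => NewTopicPostSender and the same args):

```ruby
# Stand-in recorder; the real worker would call Sidekiq::Client.push_bulk
# with these argument arrays instead of appending them to a list.
pushed = []

new_user_ids = (1..2500).to_a  # hypothetical list of deduplicated user ids

# each_slice yields successive batches of up to 1000 ids, so no offset
# arithmetic and no trailing-nil edge case are needed.
new_user_ids.each_slice(1000) do |batch|
  pushed << batch.map { |user_id| [user_id, 7] }
end

# pushed now holds three batches of sizes 1000, 1000, and 500
```

Behavior is unchanged; it is just less code to get wrong while debugging the restart problem.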
I'm using Jenkins version 2.46.2. I have installed the Parameterized Scheduler plugin to trigger the build periodically with different parameters.
But it fails to trigger the build at the scheduled time.
You have a problem with the ";" between parameters.
You have to insert a space after each parameter, for example:
H/2 * * * * % EndMarket=can ;GitBranch=DevelopmantBranch
Try without spaces between the params, like:
0 8 * * * % base_url=https://www.example.com;suite_type=Regression
I installed the plugin on Jenkins 2.67 and it works; I had the same problem with builds failing to trigger on an earlier Jenkins version.
In my case, my parameter is a Choice Parameter, so if I set the scheduler below and 'valueabc' is not in the list of choices, the build will fail to start:
H/15 * * * * %name=valueabc
I'm using the rufus-scheduler gem in my Ruby on Rails application to send emails in the background. My setup looks like:
# config/initializers/rufus_scheduler.rb
scheduler = Rufus::Scheduler.new(lockfile: '.rufus-scheduler.lock')

scheduler.cron '0 2 * * fri' do
  UserMailer.send_some_emails
end
Any change I make to the .send_some_emails class method isn't reflected in the rufus-scheduler task. How can I fix this? I don't want to restart the server every time I make a change!
Let's assume UserMailer.send_some_emails is defined in whatever/user_mailer.rb. Calling load (rather than require) re-reads the file on every run, so the scheduled block picks up the latest version of the method without a server restart:
scheduler = Rufus::Scheduler.new(lockfile: '.rufus-scheduler.lock')

scheduler.cron '0 2 * * fri' do
  load 'whatever/user_mailer.rb'
  UserMailer.send_some_emails
end