I want to know the status of a specific job and, based on that status, do something. I don't know how to do it. I tried the code below, but it does nothing.
EXEC msdb.dbo.sp_help_job
#job_name = 'CreateIOPlan'
#execution_status = 1
IF #job_name = 'CreateIOPlan' AND #execution_status = 1
BEGIN
EXEC msdb.dbo.sp_stop_job N'CreateIOPlan'
END
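For context, sp_help_job returns a result set rather than setting variables, so the IF above never sees the job's status. A minimal sketch of one alternative is to query msdb directly and stop the job only if it is still executing (the filter below is a simplification; long agent histories may need an extra check against the latest agent session):

IF EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobactivity AS a
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = a.job_id
    WHERE j.name = N'CreateIOPlan'
      AND a.start_execution_date IS NOT NULL
      AND a.stop_execution_date IS NULL   -- started but not yet finished
)
BEGIN
    EXEC msdb.dbo.sp_stop_job @job_name = N'CreateIOPlan';
END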
Would you please tell me how to assign a value to a session variable from a Task? Here's the detailed scenario of what I'm trying to accomplish and the error I'm getting.
To get the latest query_id of the SQL statement below, I am creating the following function.
CREATE OR REPLACE FUNCTION rejected_records_queryid()
RETURNS varchar
AS 'select query_id from snowflake.account_usage.query_history where QUERY_TEXT LIKE \'select 1, 2%\' AND EXECUTION_STATUS=\'SUCCESS\' order by START_TIME desc limit 1';
If I run this manually, the function works perfectly:
set qid = rejected_records_queryid();
Session variable value:
select $qid;
Please refer to the attached screenshot for the output.
Pic1
However, if I use a Task, the return value of the function is not assigned to the session variable:
CREATE OR REPLACE TASK rejected_records_queryid1
WAREHOUSE = COMPUTE_WH
SCHEDULE = '1 MINUTE'
AS
set qid2 = rejected_records_queryid();
Then I resume the task:
ALTER TASK rejected_records_queryid1 RESUME;
The following error message appears when I check the value of qid2 after the task has run:
SQL compilation error: error line 1 at position 7 Session variable '$QID2' does not exist
See the attached screenshot for the error message. In order to use it in the SQL below, I need the qid2 value to be assigned from the Task:
select * from table(result_scan($qid2))
Pic2
I would appreciate any help on how to assign the value to the variable in a Task, or any other workarounds.
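One possible workaround: a task runs in its own session, so a variable SET inside the task body is not visible in your interactive session. Instead, the task can write the query_id into a small helper table and you read it back from there. A rough sketch (the table name rejected_records_log is just an example):

create or replace table rejected_records_log (
    query_id varchar,
    captured_at timestamp_ltz default current_timestamp()
);

create or replace task rejected_records_queryid1
    warehouse = COMPUTE_WH
    schedule = '1 MINUTE'
as
    insert into rejected_records_log (query_id)
    select rejected_records_queryid();

-- back in your own session, after the task has run at least once:
set qid2 = (select query_id from rejected_records_log order by captured_at desc limit 1);
select * from table(result_scan($qid2));

The task still needs ALTER TASK rejected_records_queryid1 RESUME; to start running, and RESULT_SCAN only works for queries executed within the result-retention window.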
I tried my best to search and make sure this is not a duplicate question, but didn't find one, so I'm writing it now.
We have daily scheduled jobs which run at night for our application, so we have a support team who make sure all the jobs ran successfully every night and raise a ticket/concern if not.
Currently, the support team checks all these jobs by going into Jenkins.
We are now looking for an option to automate this monitoring by sending a mail to all stakeholders.
That said, I have 10 jobs, and I want all of them listed in a table with last night's execution status and sent to a group of people.
How can we achieve this? Is there any Jenkins plugin which can help us?
Thanks in Advance.
Unfortunately, I was not allowed to use the API in our Jenkins due to some security constraints, so I had to go with the solution below.
(I have added only the stage we need for getting the job status; feel free to edit/update as you need.)
stage('Get status of Jobs') {
    steps {
        script {
            env.mailText = ""
            def mailText = "============================================= \n"
            // Add all of your job names to a list.
            allDailyJobs = ['Prod/Daily-Jobs/dbBackups', 'Prod/Daily-Jobs/productImport']
            allDailyJobs.each { jobName ->
                // Remove the path prefix to get only the job name (to mention in the mail report)
                def regex = ~"^Prod/Daily-Jobs/"
                String just_job_name = jobName - regex
                // Get the job's latest details
                def job_number = getBuildNumber(jobName)
                def job_result = getBuildStatus(jobName)
                // Build the report in paragraph format with line separators
                mailText = mailText + " Daily Job Name => ${just_job_name} \n"
                mailText = mailText + " Build number => ${job_number} \n"
                mailText = mailText + " Build Status => ${job_result} \n"
                mailText = mailText + " ============================================= \n\n"
            }
            env.mailText = mailText
        }
    }
}
// Get the last build number for the job.
@NonCPS
def getBuildNumber(String jobName) {
    def job = jenkins.model.Jenkins.instance.getItemByFullName(jobName)
    return job.getLastBuild().getNumber()
}
// Get the status of the job.
@NonCPS
def getBuildStatus(String jobName) {
    def job = jenkins.model.Jenkins.instance.getItemByFullName(jobName)
    return job.getLastBuild().getResult().toString()
}
This stage gives you the env.mailText value, which has all the job details you need. We can use it to send the report by mail, to chats, or to any webhooks.
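For the mail itself, a minimal sketch assuming the Email Extension plugin (the emailext step) is installed; the recipient address is a placeholder:

post {
    always {
        // emailext comes from the Email Extension plugin
        emailext(
            subject: 'Daily Jobs Status Report',
            body: env.mailText,
            to: 'stakeholders@example.com'
        )
    }
}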
Important point: this requires script approval in Jenkins to execute, so please make sure you have the right access for it.
(To avoid having to approve it again and again, run it with the Groovy sandbox checkbox unticked.)
This is how it looks in the mail.
I have a Sidekiq worker that waits for a remote client to make a change to a record. Something like the following:
# myworker: async process to wait for the client to confirm status
def perform(myRecordID)
  sendClient(myRecordID)
  didClientAcknowledge = false
  while !didClientAcknowledge
    didClientAcknowledge = myRecords.find(myRecordID).status == :ACK_OK
    if didClientAcknowledge
      break
    end
    # wait for the client to perform an update on the record to confirm status
    sleep 5.seconds
  end
  Rails.logger.info("client got the message")
end
My problem is that although I can see the client has in fact performed the acknowledgement and updated the record with the correct status (ACK_OK), my Sidekiq thread continues to see the old status for myRecord.
I'm sure my logic is flawed here, but it seems like the Sidekiq process does not "see" changes to the DB. However, if I use my Rails console, I can see that the client has in fact updated the DB as expected.
Thanks!
Edit 1
OK, so here's a thought: instead of the loop, I'll schedule another call to the worker in 5 seconds. Here's the updated code:
def perform(myRecordID, retry_count)
  retry_count -= 1
  if retry_count < 1
    return
  end
  sendClient(myRecordID)
  didClientAcknowledge = false
  if !didClientAcknowledge
    didClientAcknowledge = myRecords.find(myRecordID).status == :ACK_OK
    if didClientAcknowledge
      Rails.logger.info("client got the message")
      return
    end
    # wait for the client to perform an update on the record to confirm status
    myWorker.perform_in(5.seconds, myRecordID, retry_count)
  end
  Rails.logger.info("client got the message")
end
This seems to work, but I will test a bit more. One challenge is having a retry count, which means I need to maintain some sort of variable between calls to the worker.
Edit 2: possibly this can be done by passing the time into the first call and then checking whether a timeout has been exceeded before invoking the next instance (assuming time does not stand still inside the async call as well).
Edit 3: adding the retry_count argument allows us to control how many times this worker will be spawned.
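To make the approach from these edits concrete, here is a minimal sketch of the re-enqueue version as a complete worker, covering only the acknowledgement-polling part (the class and model names are made up for illustration):

class AckWorker
  include Sidekiq::Worker

  def perform(my_record_id, retry_count = 10)
    return if retry_count < 1                  # give up after the allowed number of attempts

    record = MyRecord.find(my_record_id)
    if record.status.to_sym == :ACK_OK
      Rails.logger.info("client got the message")
      return
    end

    # not acknowledged yet: check again in 5 seconds with one fewer retry left
    self.class.perform_in(5.seconds, my_record_id, retry_count - 1)
  end
end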
my $TransactionPreviousStatus = $self->TicketObj->Status->OldValue;
I am thinking this should give the old status, but I end up getting the current status.
For example:
Old status: open
Current Status: reply-pls
So when somebody replies to the ticket, a custom script executes which should change the status back to the old value (i.e., open), but it ends up as reply-pls again.
You cannot call OldValue on TicketObj; it's a Transaction method. So if I understand your needs correctly, you need to write a scrip which triggers on StatusChange && Correspondence and sets the status back. This is a little bit tricky.
AFAIK you need to create a batch scrip which triggers on Correspondence, finds the last StatusChange transaction, and reverts it. Something like this could work:
Description: On correspond don't change the status
Condition: On Correspond
Action: User defined
Template: Blank
Stage: Batch
Custom action commit code:
my $transactions = $self->TicketObj->Transactions;
my $last_status;
while (my $transaction = $transactions->Next) {
    if ($transaction->Type eq "Status") {
        $last_status = $transaction;
    }
}
$self->TicketObj->SetStatus($last_status->OldValue);
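One small defensive tweak to the commit code above: if a ticket could reach this scrip without any prior status-change transaction, the last line can be guarded so SetStatus is only called when a status transaction was actually found:

$self->TicketObj->SetStatus($last_status->OldValue) if $last_status;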
I have been trying to save a new record with delayed_job. The code in question is below:
# method being called:
ibo.add_to_database(params[:url])

# method definition
def add_to_database(url)
  feed = Feeds.new do |f|
    f.url = url
    f.title = self.feed_title if self.feed_title
    f.link = self.site_link if self.site_link
    f.image = self.feed_image if self.feed_image
  end
  feed.save!
end
handle_asynchronously :add_to_database
I get absolutely no errors, and the job is removed from the database as it should be, except there is no change to the Feeds model. Does anyone have any idea what gives?
delayed_job runs as a daemon, so the first thing to do would be to check whether it is running:
ps ax | grep delayed_job
The next thing I would check is the actual delayed_job log; it would probably have your error description:
less log/delayed_job.log
Other than that, your code snippet looks fine.
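If the daemon is running but the log shows nothing useful, you can also run a worker in the foreground so any exception is printed straight to the console (this assumes the standard delayed_job rake task is available):

rake jobs:work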