How can I get all the active jobs scheduled in the Quartz.NET scheduler? I tried GetCurrentlyExecutingJobs(), but it always returns 0.
That method doesn't seem to work.
The only solution I have found is to loop through all the jobs:
var groups = sched.JobGroupNames;
for (int i = 0; i < groups.Length; i++)
{
    string[] names = sched.GetJobNames(groups[i]);
    for (int j = 0; j < names.Length; j++)
    {
        var currentJob = sched.GetJobDetail(names[j], groups[i]);
    }
}
When a job is found, it means that it is still active.
If you set your job as durable, though, it will never be deleted even if it has no associated triggers.
In that situation this code, which also checks for triggers, works better:
var groups = sched.JobGroupNames;
for (int i = 0; i < groups.Length; i++)
{
    string[] names = sched.GetJobNames(groups[i]);
    for (int j = 0; j < names.Length; j++)
    {
        var currentJob = sched.GetJobDetail(names[j], groups[i]);
        if (sched.GetTriggersOfJob(names[j], groups[i]).Count() > 0)
        {
            // still scheduled.
        }
    }
}
UPDATE:
I did some debugging to see what happens with GetCurrentlyExecutingJobs().
As a matter of fact, it does return the jobs being executed, but the elements are removed from the collection as soon as each job finishes executing.
You can check the two methods JobToBeExecuted and JobWasExecuted in the QuartzScheduler class.
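If you need a live view of running jobs, one workaround is to track them yourself with a job listener that mirrors those two callbacks. Here is a minimal sketch, assuming the Quartz.NET 3.x async listener API (the RunningJobsListener name is just illustrative):
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Quartz;

// Illustrative listener that keeps its own set of currently running job keys.
public class RunningJobsListener : IJobListener
{
    private readonly ConcurrentDictionary<JobKey, byte> running = new();

    public string Name => "RunningJobsListener";

    // Snapshot of the jobs that are executing right now.
    public IReadOnlyCollection<JobKey> RunningJobs => running.Keys.ToList();

    public Task JobToBeExecuted(IJobExecutionContext context, CancellationToken cancellationToken = default)
    {
        running.TryAdd(context.JobDetail.Key, 0);
        return Task.CompletedTask;
    }

    public Task JobWasExecuted(IJobExecutionContext context, JobExecutionException? jobException, CancellationToken cancellationToken = default)
    {
        running.TryRemove(context.JobDetail.Key, out _);
        return Task.CompletedTask;
    }

    public Task JobExecutionVetoed(IJobExecutionContext context, CancellationToken cancellationToken = default)
    {
        running.TryRemove(context.JobDetail.Key, out _);
        return Task.CompletedTask;
    }
}
Register it on your IScheduler instance with scheduler.ListenerManager.AddJobListener(listener, GroupMatcher<JobKey>.AnyGroup()) (GroupMatcher lives in Quartz.Impl.Matchers), and RunningJobs then gives you the jobs in flight at any moment.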
A simpler option is to get all of the job keys and iterate over them. The implementation below is for a minimal API example: it gets all JobKeys from the scheduler and then iterates over each one to get the details and execution schedule. More details are available in this sample repo: QuartzScheduler. If a job doesn't have a schedule, or its scheduled execution has completed and there are no future executions planned, then the job will not be included in the list of returned jobs.
app.MapGet("/schedules", async (ISchedulerFactory sf) =>
{
    var scheduler = await sf.GetScheduler();
    var definedJobDetails = new List<JobDetailsDto>();
    var jobKeys = await scheduler.GetJobKeys(GroupMatcher<JobKey>.AnyGroup());
    foreach (var jobKey in jobKeys)
    {
        var jobDetail = await scheduler.GetJobDetail(jobKey);
        var jobSchedule = await scheduler.GetTriggersOfJob(jobKey);
        // Skip jobs that have no triggers (e.g. durable jobs with no schedule).
        if (jobDetail != null && jobSchedule.Count > 0)
        {
            definedJobDetails.Add(new JobDetailsDto(
                jobDetail.Key.Name,
                jobDetail.Key.Group,
                jobDetail.Description,
                jobSchedule.First().GetPreviousFireTimeUtc(),
                jobSchedule.First().GetNextFireTimeUtc())
            );
        }
    }
    return definedJobDetails;
});
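For completeness, JobDetailsDto is not a Quartz type; assuming it is just a simple record carrying the values used above, it could look like this:
// Illustrative DTO for the endpoint above; not part of Quartz itself.
public record JobDetailsDto(
    string Name,
    string Group,
    string? Description,
    DateTimeOffset? PreviousFireTime,
    DateTimeOffset? NextFireTime);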
Related
In my scripted pipeline, I want to get the changes since the last successful build and, based on which files have changed, enable or disable some parts of the pipeline. I am using a Global Shared Library which contains definitions of some additional steps and the whole pipeline. To print the changes since the last successful build I am using the following code:
def showChanges(def build) {
    if ((build != null) && (build.result != 'SUCCESS')) {
        def changeLogSets = build.rawBuild.changeSets
        for (int i = 0; i < changeLogSets.size(); i++) {
            def entries = changeLogSets[i].items
            for (int j = 0; j < entries.length; j++) {
                def entry = entries[j]
                echo "${entry.commitId} by ${entry.author} on ${new Date(entry.timestamp)}: ${entry.msg}"
                def files = new ArrayList(entry.affectedFiles)
                for (int k = 0; k < files.size(); k++) {
                    def file = files[k]
                    echo " ${file.editType.name} ${file.path}"
                }
            }
        }
        showChanges(build.getPreviousBuild())
    }
}
However, when I make a change in the global library, it prints just that change and not the changes that happened in the main repository. The changeSet contains no info about the files that have changed in the main cloned repository.
This is because Jenkins loads all changes from all repositories and shared libraries referenced in your Pipeline into rawBuild.changeSets. There's nothing you can really do about this except manually filter out repositories. For instance, if you only want changes that come from the my_awesome_repo repository:
changeSets = rawBuild.changeSets.findAll { changeSet ->
    try {
        changeSet.getBrowser().getRepoUrl() =~ /my_awesome_repo/
    } catch (groovy.lang.MissingMethodException e) {
        false // repository has no `browser` property
    }
}
I want to make 3 different API calls from my Zapier code, get their return values in variables, and merge them. I can't figure out how to do that. It would be something like:
var urls = [apiUrl1, apiUrl2, apiUrl3];
var output = [];
for (i = 0; i < urls.length; i++) {
    output[i] = fetch(urls[i]);
}
This is example code. I can't get the responses into output; it only contains a blank object {}. What is the procedure for saving the fetch return values in the output array?
Since apparently the folks at Zapier do not like to give out working examples or any sort of decent documentation for this level of code intricacy... here is a working example:
var promises = [];
for (var i = urls.length - 1; i >= 0; i--) {
    promises.push(fetch(urls[i]));
}

Promise.all(promises).then(function (res) {
    var blobPromises = [];
    for (var i = res.length - 1; i >= 0; i--) {
        blobPromises.push(res[i].text());
    }
    return Promise.all(blobPromises);
}).then(function (body) {
    var output = {id: 1234, rawData: body};
    callback(null, output);
}).catch(callback);
This may not be the cleanest solution, but it works for me. Cheers!
Two things you'll need to brush up on:
Promises - especially Promise.all() - there is lots out there about that.
Callback to return the data asynchronously. Our help docs describe this.
The main reason your code fails is that you are assuming the fetch happens immediately. In JavaScript that is not the case: it happens asynchronously, and you have to use promises and callbacks to wait until the requests are done before returning the output via the callback!
I'm new to Groovy and the workflow plugin, so perhaps this is something obvious. Q1: I'm trying to run the jobs under a view in parallel. I do it like this:
jenkins = Hudson.instance

parallel getBranches()

@NonCPS def getBranches() {
    def jobBranches = [:]
    for (int i = 0; i < getJobs().size(); i++) {
        jobBranches["branch_${i}"] = {
            build job: getJobs()[i]
        }
    }
    return jobBranches
}

@NonCPS def getJobs() {
    def jobArray = []
    jenkins.instance.getView("view_A").items.each { jobArray.add(it.displayName) }
    return jobArray
}
I got:
But if I wrote it like this:
jenkins = Hudson.instance

def jobBranches = [:]
for (int i = 0; i < getJobs().size(); i++) {
    jobBranches["branch_${i}"] = {
        build job: getJobs()[i]
    }
}
parallel jobBranches

@NonCPS def getJobs() {
    def jobArray = []
    jenkins.instance.getView("view_A").items.each { jobArray.add(it.displayName) }
    return jobArray
}
Then I got something like this:
What am I doing wrong? Or is there another way to accomplish the same thing?
Q2: BTW, if there are three jobs, like j1, j2, and j3: j1 and j2 are executed first and in parallel, and when one of them is finished, j3 will be executed. How do I do this?
I figured out why. The closure captures the loop variable i by reference, so by the time the parallel branches actually run they all see its final value. Copying it into a local variable inside the loop fixes this:
for (int i = 0; i < getJobs().size(); i++) {
    def j = i
    jobBranches["branch_${i}"] = {
        build job: getJobs()[j]
    }
}
Then it will work!
I need to replace multiple values in the JSONStore of IBM Worklight.
With the following code, only the first value is saved. Why?
.then(function() {
    for (var index = 0; index < elencoSpese.length; index++) {
        var spesa = elencoSpese[index];
        var spesaReplace = {_id: spesa.id, json: spesa};
        spesa.id_nota_spesa = idNotaSpesa;
        spesa.checked = true;
        WL.JSONStore.get(COLLECTION_NAME_SPESE).replace(spesaReplace);
    }
})
You want to build an array of JSONStore documents and pass it to the replace API. For example:
.then(function() {
    var replacementsArray = [];
    for (var index = 0; index < elencoSpese.length; index++) {
        var spesa = elencoSpese[index];
        var spesaReplace = {_id: spesa.id, json: spesa};
        spesa.id_nota_spesa = idNotaSpesa;
        spesa.checked = true;
        replacementsArray.push(spesaReplace);
    }
    return WL.JSONStore.get(COLLECTION_NAME_SPESE).replace(replacementsArray);
})
.then(function (numOfDocsReplaced) {
    // numOfDocsReplaced should equal elencoSpese.length
})
I assume this happens in the JavaScript implementation of the JSONStore API; if that's the case, the answer is in the documentation here. The JavaScript implementation of JSONStore expects code to be called serially: wait for an operation to finish before you call the next one. When you call replace multiple times without waiting, you're calling the API in parallel instead of serially. This should not be an issue in the production environments (i.e. Android, iOS, WP8 and W8).
How can I get all the tasks that have been scheduled in the Quartz scheduler, so that I can display them in a web page?
Should be something like this:
string[] groups = myScheduler.JobGroupNames;
for (int i = 0; i < groups.Length; i++)
{
    string[] names = myScheduler.GetJobNames(groups[i]);
    for (int j = 0; j < names.Length; j++)
    {
        // groups[i]
        // names[j]
    }
}
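Note that this uses the old string-based API from Quartz.NET 1.x. On newer versions (2.x and later) jobs are addressed by JobKey instead; a rough equivalent, assuming the 3.x async API, would be:
// GroupMatcher comes from the Quartz.Impl.Matchers namespace.
var jobKeys = await myScheduler.GetJobKeys(GroupMatcher<JobKey>.AnyGroup());
foreach (var jobKey in jobKeys)
{
    // jobKey.Group
    // jobKey.Name
    var detail = await myScheduler.GetJobDetail(jobKey);
    var triggers = await myScheduler.GetTriggersOfJob(jobKey);
}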