Serial-Parallel Execution of multiple AsyncTasks in Quartz.net - quartz.net

I can do this with the Task class in C# 4.0, but I want to give the Quartz.net framework a chance.
Task A (runs every Sunday)
    Task A.1
        Task A.1.1 .. Task A.1.n (run after Task A.1 completes; n is determined once Task A.1 finishes)
There are 3 exceptions:
RetryCurrentSubTaskException
CancelCurrentSubTaskException (the main task can still complete)
CancelAllTasksException (critical exception)
How can I achieve this with quartz.net or any other lightweight scheduling framework?
(the persistence layer is SQL Server)
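For illustration, here is a minimal sketch of how this shape could be wired up in Quartz.NET, assuming the 3.x async API; the job classes (TaskA1Job, TaskA1SubJob), the data-map key and the exception mapping are illustrative assumptions, not a complete solution:

using System;
using System.Threading.Tasks;
using Quartz;

// The three exception types from the question (illustrative stubs).
public class RetryCurrentSubTaskException : Exception { }
public class CancelCurrentSubTaskException : Exception { }
public class CancelAllTasksException : Exception { }

// Task A.1: does its own work, then fans out n subtasks (n only known afterwards).
public class TaskA1Job : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        int n = DoTaskA1Work();

        for (int i = 1; i <= n; i++)
        {
            var subJob = JobBuilder.Create<TaskA1SubJob>()
                .WithIdentity($"taskA.1.{i}", "taskA")
                .UsingJobData("subTaskIndex", i)
                .Build();

            // Fire each subtask immediately; Quartz executes them in parallel
            // up to the configured thread pool size.
            await context.Scheduler.ScheduleJob(
                subJob, TriggerBuilder.Create().StartNow().Build());
        }
    }

    private int DoTaskA1Work() => 5; // placeholder: real logic decides n
}

// One of the subtasks Task A.1.1 .. Task A.1.n.
public class TaskA1SubJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        try
        {
            // ... subtask work ...
            return Task.CompletedTask;
        }
        catch (RetryCurrentSubTaskException ex)
        {
            // Ask Quartz to re-fire this subtask immediately.
            throw new JobExecutionException(ex) { RefireImmediately = true };
        }
        catch (CancelCurrentSubTaskException)
        {
            // Drop only this subtask; the main task can still complete.
            return Task.CompletedTask;
        }
        catch (CancelAllTasksException ex)
        {
            // Critical: stop re-firing this job; cancelling the sibling subtasks
            // as well would need explicit scheduler calls (e.g. DeleteJob).
            throw new JobExecutionException(ex) { UnscheduleAllTriggers = true };
        }
    }
}

// Scheduling the entry point every Sunday at 06:00
// (Quartz cron fields: sec min hour day-of-month month day-of-week):
//
//   var scheduler = await Quartz.Impl.StdSchedulerFactory.GetDefaultScheduler();
//   await scheduler.Start();
//   await scheduler.ScheduleJob(
//       JobBuilder.Create<TaskA1Job>().WithIdentity("taskA.1", "taskA").Build(),
//       TriggerBuilder.Create().WithCronSchedule("0 0 6 ? * SUN").Build());

Task A itself could be scheduled the same way, with Task A.1 chained off its completion (for example via an IJobListener); the sketch only covers the fan-out and the exception handling, and SQL Server persistence is a matter of configuring Quartz.NET's AdoJobStore.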

Related

How can I retrieve the execution status of parallel-triggered child jobs in a pipeline script

I have a pipeline script that executes child jobs in parallel.
Say I have 5 pieces of data (a, b, c, d, e) that have to be processed by 3 jobs (J1, J2, J3).
My pipeline script has the following format:
for (int i = 0; i < size; i++) {
    def index = i
    branches["branch${i}"] = {
        build job: 'SampleJob', parameters: [
            string(name: 'param1', value: '${data}'),
            string(name: 'dummy', value: "${index}")
        ]
    }
}
parallel branches
My problem is: say the execution is happening on Job 1 with data 1, 2, 3, 4, 5, and the execution for data 3 fails on Job 1. Then the execution for data 3 should stop right there and should not happen in the subsequent parallel executions on Jobs 2 and 3.
Is there any way I can read the execution status of the parallel-executing jobs from the pipeline script, so that I can block the execution of data 3 in Jobs 2 and 3?
I have been stuck here for quite a long time. Hoping for a solution from the community. Thanks a lot in advance.
In summary, it sounds like you want to
run multiple jobs in parallel against different pieces of data. I will call the set of related jobs the "batch".
avoid starting a queued job if any of the jobs in the batch have failed
automatically abort a running job if any of the jobs in the batch have failed
The jobs need some way to communicate their failure to the others. Use a shared storage location to store the "failure flag". If the file exists, then one or more of the jobs have failed.
For example, a shared NFS path: /shared/jenkins/jobstate/<BATCH_ID>/failed
At the start of the job, check for the existence of this path. Exit if it does. The file doesn't necessarily need to contain any data - its presence is enough.
Since you need running jobs to abort early if the failure flag exists, you will need to poll that location periodically. For example, after each unit of work. Again, if the file exists then exit early.
If you don't use NFS, that's ok. You could also use an object storage bucket. The important thing is that the state is accessible to all the relevant build jobs.
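The flag check itself is trivial in whatever language the jobs run. As a language-neutral illustration (written in C# here; the shared root path and the BATCH_ID environment variable are assumptions, not Jenkins built-ins), each job in the batch could use a helper like this:

using System;
using System.IO;

static class BatchFailureFlag
{
    // e.g. /shared/jenkins/jobstate/<BATCH_ID>/failed
    static string FlagPath => Path.Combine(
        "/shared/jenkins/jobstate",
        Environment.GetEnvironmentVariable("BATCH_ID") ?? "unknown-batch",
        "failed");

    // Call at job start and after each unit of work; exit early if true.
    public static bool BatchHasFailed() => File.Exists(FlagPath);

    // Call from the failing job; the empty file's presence is the signal.
    public static void MarkBatchFailed()
    {
        Directory.CreateDirectory(Path.GetDirectoryName(FlagPath));
        using (File.Create(FlagPath)) { }
    }
}

In the pipeline itself the same idea can be expressed with shell steps or fileExists/writeFile; the essential point is that every job in the batch checks and writes the same shared location.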

Spring Cloud Dataflow Task Execution Fails on subsequent runs

Name: spring-cloud-dataflow-server
Version: 2.5.0.BUILD-SNAPSHOT
I have a very simple task created. The first run always COMPLETES fine with NO ISSUES. If the task is run again, it FAILS with the following error.
A subsequent launch of the same task fails with the exception below, even though it is a fresh run after the previous execution completed fully. If a task has been run once, can't it be run again?
(log from Task Execution Details - Execution ID: 246)
Caused by: org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException: A job instance already exists and is complete for parameters={-spring.cloud.data.flow.taskappname=composed-task-runner, -spring.cloud.task.executionid=246, -graph=threetasks-t1 && threetasks-t2 && threetasks-t3, -spring.datasource.username=root, -spring.cloud.data.flow.platformname=default, -dataflow-server-uri=http://10.104.227.49:9393, -management.metrics.export.prometheus.enabled=true, -management.metrics.export.prometheus.rsocket.host=prometheus-proxy, -spring.datasource.url=jdbc:mysql://10.110.89.91:3306/mysql, -spring.datasource.driverClassName=org.mariadb.jdbc.Driver, -spring.datasource.password=manager, -management.metrics.export.prometheus.rsocket.port=7001, -management.metrics.export.prometheus.rsocket.enabled=true, -spring.cloud.task.name=threetasks}. If you want to run this job again, change the parameters.
A Job instance in a Spring Batch application requires unique Job Parameters, and this is by design.
In this case, since you are using the Composed Task, you can use the property --increment-instance-enabled=true as part of the composed task definition to handle it. This property makes sure that each Job Instance gets unique Job parameters.
You can check the list of properties supported by the Composed Task Runner in the Spring Cloud Data Flow documentation.

How to spawn an offline process from a TFS vNext build step that would last beyond the build?

It seems that if my build step spawns a child process, that process cannot survive the end of the build - it is killed.
But I have a scenario where a child process is triggered to complete certain operations offline that the build should not wait for (reporting metrics to Azure AppInsights).
This procedure worked fine in XAML builds, but now that we migrated to vNext it is broken, because the child process is killed when the build ends.
What can be done about it?
The easiest way is to schedule a task using the Windows Task Scheduler.
Example using the Microsoft.Win32.TaskScheduler NuGet package:
using (var ts = new TaskService())
{
    // Create a new task definition and assign properties
    var td = ts.NewTask();

    // The trigger date is in the past, so the task never fires on its own;
    // it is started explicitly with Run() below.
    td.Triggers.Add(new TimeTrigger(DateTime.Now.AddDays(-1)));
    td.Actions.Add(new ExecAction(MyExe, MyArgs));

    // Register the task, launch it immediately (detached from the build process),
    // then remove the definition again.
    ts.RootFolder.RegisterTaskDefinition(MyTaskName, td).Run();
    ts.RootFolder.DeleteTask(MyTaskName);
}

How do I stop a running task in Dask?

When using Dask's distributed scheduler I have a task that is running on a remote worker that I want to stop.
How do I stop it? I know about the cancel method, but this doesn't seem to work if the task has already started executing.
If it's not yet running
If the task has not yet started running, you can cancel it by cancelling the associated future:
future = client.submit(func, *args) # start task
future.cancel() # cancel task
If you are using dask collections then you can use the client.cancel method
x = x.persist() # start many tasks
client.cancel(x) # cancel all tasks
If it is running
However, if your task has already started running on a thread within a worker, there is nothing you can do to interrupt that thread. Unfortunately, this is a limitation of Python.
Build in an explicit stopping condition
The best you can do is to build some sort of stopping criterion into your function with your own custom logic. You might consider checking a shared variable within a loop. Look for "Variable" in these docs: http://dask.pydata.org/en/latest/futures.html
from dask.distributed import Client, Variable

client = Client()
stop = Variable()  # shared flag visible to both client and workers
stop.set(False)

def long_running_task():
    while not stop.get():
        ...  # do a unit of work, then re-check the flag

future = client.submit(long_running_task)
# ... wait a while
stop.set(True)  # ask the task to stop at its next check

How to get the result of a running Celery periodic task?

When the periodic task is running, how do I get the task's return value? I need the result of the run.
This is my problem:
For example, my periodic task:
from celery import shared_task

@shared_task(name='add')
def add():
    x, y = 1, 2
    return x + y
I add the task as a periodic task from the Django admin, then start the worker with the -B option and DEBUG logging. It runs well, but I want to get the return value. Is there any method to get the result while the periodic task is running?
To get the result of a task you can call the .get() method on the AsyncResult object, which is returned when you add the task to the queue:
result = add.delay()
result.get()  # returns 3 (the value returned by add() above)
Also make sure that your results are stored for long enough by setting CELERY_RESULT_PERSISTENT or CELERY_TASK_RESULT_EXPIRES. Read more in the AMQP backend settings section.
