How to start/stop all tasks in Spring Cloud Data Flow

Does Spring Cloud Data Flow support a start/stop-all-tasks feature (starting or stopping them all at the same time)? If so, how do I do that?
Can I do it by updating the database directly?

You can stop a task execution by invoking a stop on a specific task execution ID. The documentation here covers this in more detail.
If you are running a Spring Batch application (which is itself a task app), it can be restarted after being stopped.
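There is no single stop-all call in Data Flow itself, but you can script one by listing the task executions over the REST API and requesting a stop for each running one. A minimal Python sketch, assuming a local SCDF server and the 2.x REST endpoints (GET /tasks/executions to list, POST /tasks/executions/{id} to request a stop); verify the paths and field names against your server's REST API docs:

import requests

SCDF = "http://localhost:9393"  # assumed Data Flow server URL

page = requests.get(f"{SCDF}/tasks/executions").json()
for execution in page.get("_embedded", {}).get("taskExecutionResourceList", []):
    if execution.get("taskExecutionStatus") == "RUNNING":
        # POST to the execution resource requests a stop for that execution ID
        requests.post(f"{SCDF}/tasks/executions/{execution['executionId']}")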

Related

Is it possible to temporarily disable an ECS Service from starting new tasks once some tasks are stopped?

I have a Ruby on Rails application running on ECS. Each service has its own desired count.
I'm using the ECS Ruby SDK and Resque/Redis.
Every task takes a job from the queue (Resque/Redis) and, once it finishes, takes another one.
But sometimes there are no jobs in the queue. In that case I want to decrease the number of running tasks. There is a way to do this with the ECS Ruby SDK:
tasks_to_stop.each do |task|
  # Stop the idle ECS task by its task ID/ARN
  ecs.stop_task(cluster: cluster_name, task: task[0]['id'])
  # Shrink the service so the scheduler does not replace the stopped task
  current = ecs.describe_services(cluster: cluster_name, services: [service_name]).services.first.desired_count
  ecs.update_service(cluster: cluster_name, service: service_name, desired_count: current - 1)
  # Remove the matching Resque worker (app-specific helper)
  resque.remove_worker(task[0]['ip'])
end
Here I loop through each task (a specific ECS task ID that isn't running any job), kill that task, and then decrease desired_count by 1.
In most cases this works, but I can't rely on it, since there is sometimes a delay when I call the stop_task method. It would be great if there were a way to tell the ECS service to temporarily stop launching new tasks even while the number of running tasks doesn't match the desired count.
Is there a way to do this from ECS GUI, AWS CLI or ECS Ruby SDK?

Dask restart worker(s) using client

Is there a way, using the Dask client, to restart a worker or a given list of workers? I need a way to bounce a worker after a task executes, to reset the state of the process, which may have been changed by the execution.
Client.restart() restarts the entire cluster, and so may end up killing tasks running in parallel to the one that just completed.
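Depending on your dask.distributed version, you may not need to bounce the whole cluster. A minimal sketch, assuming a recent distributed release (which provides Client.restart_workers) and workers running under nannies; the addresses are hypothetical:

from dask.distributed import Client

client = Client("tcp://scheduler-host:8786")  # hypothetical scheduler address

# Restart only the listed workers; the rest of the cluster keeps running.
# Requires workers started under a nanny and a recent dask.distributed release.
client.restart_workers(workers=["tcp://10.0.0.5:40831"])  # hypothetical worker address

On older releases, a common workaround is client.retire_workers on the targets and letting your resource manager spawn replacement processes.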

Need help regarding designing the architecture of application and deployment strategy using Dockers

Let me first explain the application that I will develop.
We have the following set of workflows to be developed in Camunda:
Global Subprocess Workflows such as fetchImageAttributes, fetchFileAttributes, etc.
FileTransfer Workflow.
FileConverter Workflow.
The FileTransfer workflow uses the global subprocesses via a call activity task in Camunda; the FileConverter workflow uses the subprocesses the same way.
A global subprocess is a long-running process, so whenever one starts it sends a message to a specific rabbit queue and then waits for the response on a specific rabbit queue, resuming the subprocess via a receive task.
The FileTransfer and FileConverter workflows can be invoked independently. We have created a rabbit queue listener in Spring that listens to a specific queue for each workflow; whenever a message is dropped into one of those queues, the corresponding workflow is invoked.
During development, all three workflows will be deployed and tested in a single Tomcat instance, so the call activities work without any concerns.
The plan is then to host them in the cloud using Docker, with the three workflows in three containers:
Container 1 will contain Global Subprocess Workflow.
Container 2 will contain FileTransfer Workflow.
Container 3 will contain FileConverter Workflow.
All three Camunda workflows will use the same database to store their workflow activities and variables.
Challenges faced:
Since the FileTransfer and FileConverter workflows both use the Global Subprocess via call activities, those call activities will fail because the subprocesses are not available in the same runtime engine. Should we use Camunda REST services?
To overcome the above challenge I thought of
Deployment plan 2:
Container 1 will contain Global Subprocess Workflow & FileTransfer Workflow.
Container 2 will contain Global Subprocess Workflow & FileConverter Workflow.
Challenges faced:
Since the Global Subprocess workflows are present in both containers, the response for a FileTransfer workflow may get pulled by the FileConverter workflow, because the global subprocesses in both containers listen to the same rabbit queue. This can lead to an error where the process instance is not found.
So I'd appreciate help with a better architecture, or guidance from anyone with solid experience deploying Camunda in heterogeneous clusters.
Thanks.
You could rethink how the communication between the processes is implemented. Since you separate the deployments, a subprocess/call activity is not an option. A better approach is to use BPMN messages and create a choreography between the processes. Since you are already using rabbit, you can develop a BPMN-message-to-rabbit adapter and pass messages around.
There are two other approaches for connecting the systems:
using external service tasks
using a newer approach called Zeebe
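To make the rabbit-to-BPMN-message adapter concrete: each listener can turn a rabbit reply into a correlated BPMN message, so the engine resumes exactly the instance waiting on it rather than whichever container pulled the message first. A minimal Python sketch against Camunda 7's REST API (POST /message); the engine URL, message name, and correlation field are assumptions:

import requests

ENGINE = "http://camunda-host:8080/engine-rest"  # assumed engine base URL

def on_rabbit_reply(payload):
    # Correlate the reply to the single process instance waiting on this
    # BPMN message. Scoping by business key ensures a FileTransfer reply
    # can never resume a FileConverter instance, even when both containers
    # consume from the same queue.
    requests.post(f"{ENGINE}/message", json={
        "messageName": "subprocessResponse",      # hypothetical BPMN message name
        "businessKey": payload["correlationId"],  # hypothetical field in the rabbit payload
    })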

Marathon - do not redeploy app when return code = 0?

We have a Spring Boot application deployed in a Docker container and managed with Mesosphere (Marathon + Mesos). The Spring Boot app is deployed via Marathon and, once its work is complete, it exits with code 0.
Currently, every time the boot application terminates, Marathon redeploys the app, which I wish to disable. Is there a setting in the application's Marathon JSON config file that will prevent Marathon from redeploying the app unless it exits with a non-zero code?
If you just want to run one-time jobs, I think Chronos would be the right tool. Marathon is, as Michael wrote, for long-running tasks.
I think there's a fundamental misunderstanding of what Marathon does: it is meant for long-running tasks (or, put another way, there's a while loop in there somewhere, maybe an implicit one). If your app exits, Marathon sees this, assumes it has failed, and starts it again.
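For reference, Chronos jobs are defined by a JSON document much like a Marathon app, and a one-shot run can be expressed with an ISO 8601 repetition count of R1. A hedged sketch (the names and values are placeholders; check the Chronos job API for the exact fields in your version):

{
  "name": "my-boot-job",
  "command": "docker run --rm myorg/my-boot-app",
  "schedule": "R1/2015-06-01T00:00:00Z/PT24H",
  "epsilon": "PT30M",
  "owner": "team@example.com"
}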

How to automatically stop a cron-like service (whenever/clockwork) with Rails server shutdown?

I implemented the "whenever" gem in my Rails app to execute an operation periodically.
The operation should run only while the Rails app is running and stop automatically when the app shuts down.
I managed to start the cron service automatically by calling it from an initializer script, but I can't figure out how to stop the service except by calling a shell script manually. How can I do that?
I don't care which cron-like service it is (whenever, clockwork, etc.).
Whenever and Clockwork work in opposite ways, so you will probably have to handle them differently.
Whenever writes the cron commands to a crontab. To stop the commands from running, you remove them from the crontab. You can shell out to the system on exit and run
whenever --clear-crontab
to clear the job schedules.
Clockwork is a cron replacement with its own daemon. Because it's daemonized, you should be able to stop the daemon on exit.
If you want to generalize this, you might write two classes describing the behaviour of each, and a superclass that invokes one or the other or both (and handles errors from a command that doesn't exist).
